Encrypted lists predictable difficulties and implementation needs
Bhavishya Desai wrote:
Now I would like to know (specifically) what some other threats are which could affect this, and any difficulties with implementation.
I imagine that the encryption and/or hash algorithms will change over time as encryption is broken and people figure out ways to create hash collisions. Therefore I'd imagine keys will change over time.
I'm guessing this carries some consequences:
the list key will change over time, therefore:
- old subscribers need to be alerted to the new list key.
- MUAs need to be able to make use of the current list key (perhaps the list accepts posts under the old key for a limited time to give subscribers a chance to switch to using the new key).
subscribers' keys will change over time, therefore:
- there needs to be an easy way for a subscriber to get the list to accept posts under the subscriber's new key.
list archives raise interesting questions: will the list archives be encrypted too?
- if the goal is to never let a list post travel unencrypted, I guess each archive post will be signed+encrypted with the current list key. Therefore will each archive message be decrypted and re-encrypted at each list key change?
  - yes: this implies a bigger and bigger decryption/re-encryption job at each list key change as a list archive grows. Presumably this task will become computationally intensive at some point, possibly beyond the scale of being done at all for very large mailing lists carrying large list posts.
  - no: each archive post will be untouched after sending. Therefore archives will feature a set of list posts signed+encrypted with whatever list key was current at the time that post was sent out.
    - thus old list public keys must be kept around and published forever so archive readers can decrypt/verify signatures of archived posts?
      - yes: this carries some questions about GPG policy (below).
      - no: how will this work?
GPG policy issues: the above raises questions about GPG:
- will GPG keep old encryption algorithms and hash algorithms around (perhaps warning not to use them for new keys, and only using them for decryption as needed)?
- or are users going to need to retain old versions of GPG to handle verifying archived list posts encrypted & signed with old list keys (I don't see this working out well)?
It's possible there is something fundamental about this entire process I do not understand and therefore I've completely missed something about this process which led me down a path I should not have gone down. If that's the case, please do let me know where I went wrong.
Thanks.
All of these proposals overlook significant known, current threats -- none of which they're capable of addressing, but some of which badly undercut the suggested approaches.
To list just one of those -- albeit a rather prominent one -- the Internet's population of hijacked systems (aka bots or zombies) continues to grow. This has been a growing problem for the last 15 years, e.g.:
Vint Cerf: one quarter of all computers part of a botnet
http://arstechnica.com/news.ars/post/20070125-8707.html
I have studied this issue extensively since 2002, and while I initially thought Cerf's estimate a bit high, further study and retrospection suggest that it was probably about right. Extrapolating to the present day, one-quarter is probably still about right -- but of course the system population has grown massively in the interim.
The problem has recently been badly exacerbated by the rapid deployment of IoT devices whose security ranges between "laughable" and "non-existent". These in turn are quickly being utilized to compromise other systems. The problem is also being badly exacerbated by various governments and organized criminal operations which are developing, acquiring, and deploying zero-days as fast as they possibly can. And it's being further exacerbated by increasingly sophisticated attacks from adversaries who are less prominent and less well-resourced; to put it another way, the average attacker now has access to means and methods far beyond what they had a decade ago. I rather suspect that "one quarter" will become "one third" in the next few years.
What all of this means is that once a list passes N members, where we can debate about N, the probability that at least one of those members has already been compromised even before they've joined the list starts rapidly increasing. Of course other factors may mitigate this: if all N members use exclusively open-source software, do not use freemail providers, do not use smartphones or IoT devices, etc., then the probability that one of them is compromised diminishes. (Worth noting that in a list constituted like this, encryption offers little additional security value, since its members are already doing the things most likely to avoid being compromised.) If on the other hand, some of the list members are using worst practices, then the probability that at least one is compromised will increase.
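The arithmetic behind this: if each member independently has some probability p of already being compromised (p here is my illustrative stand-in, not a figure from this post), the chance that at least one of N members is compromised is 1 - (1 - p)^N, which climbs toward certainty quickly.

```python
def p_at_least_one_compromised(p: float, n: int) -> float:
    """Probability that at least one of n independent members, each
    compromised with probability p, is compromised."""
    return 1.0 - (1.0 - p) ** n

# Even a 5% per-member rate makes compromise near-certain for modest lists:
# n=10 -> ~0.40, n=50 -> ~0.92, n=300 -> ~0.9999998
```

The independence assumption is generous to the defender; correlated factors (shared software, shared providers) only make things worse.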
As I said, we can debate N -- and we can debate the probability. What is not open to debate is that this is real and significant. Very long experience running mailing lists and observing partial bot-generated activity from members strongly suggests, to give just one data point, that once N reaches "a few hundred" the probability approaches unity. However, I must emphasize that the word "partial" means that this is a significant UNDER-observation -- it's very clear that there is bot-generated activity I'm missing. Rather a lot of it, actually. So "a few hundred" is probably a highly optimistic estimate for N and its true value is probably much lower.
So even if the encryption works perfectly (which it won't) and it's deployed perfectly (which it won't be) and it's usable by everyone (which it won't be) and it plays nice with policies like attachment removal, signature removal, boilerplate addition, etc. (which it won't) and the encryption algorithm is perfect (which it won't be) and the encryption implementation is perfect (which it won't be) and all of this rather complex machinery works perfectly...it will all be rendered moot the moment one list member's system is compromised.
In other words, what you propose to build here is an extremely brittle system that's subject to total failure if even just a single endpoint fails. And there are *hundreds of millions* of endpoints that have already failed.
Thus, even assuming that the systems of encrypted-list members aren't specifically targeted, there is an uncomfortably high probability that the messages traversing it will be pre-compromised from the start.
And of course if those systems *are* specifically targeted -- which is likely for people with use cases that suggest encrypted mailing lists -- then the threat model changes: it no longer consists of the normal level of attacks that all systems are subject to, but includes an elevated level of attacks that will target them in particular.
I think that this is an instance where a huge amount of well-intended design and development effort will result in a "solution" that cannot provide what it intends to because underlying circumstances prevent it. And -- having studied those underlying circumstances for a long time -- I can sadly report that the problem is getting worse and will continue to get worse, because (a) all of the various factors contributing to it are also getting worse and (b) there are no reasons for anyone to significantly invest in making it better.
---rsk
Rich Kulawiec wrote:
What all of this means is that once a list passes N members, where we can debate about N, the probability that at least one of those members has already been compromised even before they've joined the list starts rapidly increasing.
I understand there are more insecure devices on the Internet all the time and that's unfortunate, but I don't think it's avoidable. What do you suggest we do about this using Mailman (since this is Mailman-developers)?
Perhaps this means I don't understand what the goals of combining a mailing list and public key cryptography are (could someone please state what those goals are?). I took the goals to be the following:
make changes in messages easier to identify at the endpoints, so long as posters use strong cryptographic methods and sign+encrypt their posts. Sure, a compromised device could change the message between the time someone writes their message and the time they sign+encrypt it, thus signing+encrypting an altered message. But we have that problem now, and I don't see anyone calling for all research work to stop on any number of other things because of it. Also, for those without compromised devices who know what they're doing (a smaller set of people, as you point out), posts to mailing lists are likely easily changeable without most people being the wiser or having any ability to verify, short of constantly asking others "Did you really post this?". Given how much en-route data alteration is going on, it seems we ought to do something to at least let the user know when the message they're looking at has a high likelihood of not being what was sent.
provide a practical means of using extant services (along with most of the UI expectations and technical advantages we've come to expect) to convey encrypted data and store encrypted data such that the plaintext of a message is not often exposed to any program server-side.
allow users to do some degree of identity confirmation. With what I've seen in this thread so far, poster identities are as verifiable as public key encryption and web of trust allow. If I see a post from someone I trust whom I know knows how to use, say, GPG correctly I then have increased confidence their post was signed by them. Currently, where lists are typically entirely plaintext, I understand it's quite easy for someone to post in someone else's name and email address and for any network operator (such as one's ISP) to alter the data en route.
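The first goal above -- detecting alteration at the endpoint -- can be sketched as follows. HMAC with a shared secret is only a stand-in for GPG's public-key detached signatures (the names and key are hypothetical); it illustrates just the verify-or-fail behavior the MUA would rely on:

```python
import hashlib
import hmac

def sign(body: bytes, key: bytes) -> str:
    """Stand-in for producing a detached signature over the message body."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(body: bytes, key: bytes, signature: str) -> bool:
    """Stand-in for signature verification in the reader's MUA."""
    return hmac.compare_digest(sign(body, key), signature)

key = b"poster-signing-key"  # hypothetical; real GPG uses a key pair
original = b"My actual post to the list."
signature = sign(original, key)

assert verify(original, key, signature)                 # unmodified: verifies
assert not verify(b"An altered post.", key, signature)  # tampered: fails
```

Any en-route alteration of the body breaks verification, which is exactly the signal the reader currently has no way to get.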
But I could have the goals of this entire endeavor completely wrong, in which case I await correction.
On Wed, Mar 15, 2017 at 11:31:44PM -0500, J.B. Nicholson wrote:
I understand there are more insecure devices on the Internet all the time and that's unfortunate, but I don't think it's avoidable. What do you suggest we do about this using Mailman (since this is Mailman-developers)?
I suggest that Mailman do nothing, because even if it solves all the problems that it can solve, all it will do is provide a thin veneer of security/privacy on top of a thoroughly rotten foundation. Yes, there will be small, limited cases where it'll be able to deliver on its promises -- because every person involved is diligent and every device involved is secure -- but that's clearly not the way to bet.
Moreover, none of this comes for free: there is opportunity cost, complexity cost, maintenance cost, interoperability cost, etc. In my view, it's not worth incurring all these costs to implement something that we already know, today, right now, is not going to work in the contemporary Internet environment -- because it relies on underlying assumptions about endpoint security that almost certainly won't be true as soon as the deployment scale reaches modest numbers.
I think a better course of action is to recommend that those with the sort of requirements being articulated here not use mailing lists at all.
---rsk
On Thu, 16 Mar 2017 10:46:27 -0400 Rich Kulawiec <rsk@gsp.org> wrote:
I suggest that Mailman do nothing, because even if it solves all the problems that it can solve, all it will do is provide a thin veneer of security/privacy on top of a thoroughly rotten foundation. Yes, there will be small, limited cases where it'll be able to deliver on its promises -- because every person involved is diligent and every device involved is secure -- but that's clearly not the way to bet.
Even if not every device is secure, the difficulty, and likely cost, for an attacker to snoop on the communications is much greater for an encrypted mailing list than for a non-encrypted one.
FWIW, I'm part of an NGO (Digital Society Switzerland, https://digitale-gesellschaft.ch ) which uses encrypted mailing lists for its internal communications. We use Schleuder ( https://schleuder.nadir.org/ ), which isn't perfect, but works fine for us.
Greetings, Norbert
Mailman-Developers mailing list
Mailman-Developers@python.org
https://mail.python.org/mailman/listinfo/mailman-developers
Mailman FAQ: http://wiki.list.org/x/AgA3
Security Policy: http://wiki.list.org/x/QIA9
Searchable Archives: http://www.mail-archive.com/mailman-developers%40python.org/
Unsubscribe: https://mail.python.org/mailman/options/mailman-developers/nb%40bollow.ch
On Mar 15, 2017, at 09:47 PM, Rich Kulawiec wrote:
What all of this means is that once a list passes N members, where we can debate about N, the probability that at least one of those members has already been compromised even before they've joined the list starts rapidly increasing.
That assumes an open membership policy. Wouldn't much of this be mitigated with a closed subscription policy? I agree that the security of an encrypted remailer such as we're discussing is only as secure as its recipients. Yet there still may be value in encrypting the communication channels into and out of Mailman, even if that can be compromised at the end-points.
Does that make it worth it to add as a supported feature in the core? It depends. What I would be very interested in -- at least as a first step -- are ways to enable experimentation into such features by adding hooks and APIs that allow third-party plugins to support this feature. Presumably such plugins would have utility for other use cases too.
I can sadly report that the problem is getting worse and will continue to get worse, because (a) all of the various factors contributing to it are also getting worse and (b) there are no reasons for anyone to significantly invest in making it better.
(b) is not necessarily true. There is lots of work going on to provide secure base platforms on which to implement IoT devices. Since that's mostly off topic for this list, I'll avoid plugging technologies here, but if you know me and my $dayjob, you can probably guess. Feel free to email me off-list if you want more information.
Cheers, -Barry
On Thu, Mar 16, 2017 at 12:47 PM, Rich Kulawiec <rsk@gsp.org> wrote:
I think that this is an instance where a huge amount of well-intended design and development effort will result in a "solution" that cannot provide what it intends to because underlying circumstances prevent it. And -- having studied those underlying circumstances for a long time -- I can sadly report that the problem is getting worse and will continue to get worse, because (a) all of the various factors contributing to it are also getting worse and (b) there are no reasons for anyone to significantly invest in making it better.
I'd submit that this is tantamount to saying "it's impossible to make a 100% secure system so why bother even trying".
Yes, a straightforward encrypted remailer such as is being discussed here has limited applicability, and will require careful implementation to function "properly" in the "real world". However, I'm certainly aware of several organisations who would happily use such a platform in a limited capacity, for endpoints which have already been hardened. Nobody expects that when this work is completed every Mailman list in the world will suddenly become encrypted; it won't, and the VAST majority of them never will.
Anything that raises the barrier to entry for surveillance is a good thing IMO. If the attacker has to go from "passively sniffing plaintext off the wire" to "actively compromising an endpoint", that is still an improvement in your security posture -- and that is the way things change, incrementally.
Somebody else on the thread has raised the point of whether this should actually be implemented in core or whether we should merely expose an API to enable this functionality to be added as a plugin. I think that's a debate worth having, but suggesting we write the whole thing off because "all these nodes are already compromised" is not remotely useful.
Barry Warsaw wrote:
Does that make it worth it to add as a supported feature in the core? It depends. What I would be very interested in -- at least as a first step -- are ways to enable experimentation into such features by adding hooks and APIs that allow third-party plugins to support this feature. Presumably such plugins would have utility for other use cases too.
I was thinking this same thing; if Mailman had such hooks and an API allowing people to write plugins, everything discussed in this thread so far could be done with a plugin.
The overall goal for this plugin's API needs is to allow a Mailman admin or list owner to install this plugin and see the normal Mailman UI change slightly in relevant places to accommodate public-key handling.
Such a plugin for this task would need to:
- have read access to each inbound message sent to a list where this plugin is supposed to be active. This is used to check a signature and possibly decrypt a message intended for list consumption when people send commands to the list encrypted with the list's public key.
- somehow signal acceptance/rejection (to the next stage in Mailman's normal processing queue) of a message as coming from a subscriber (for lists where only subscribers may post) or a suitable other user (for lists where non-subscribers may also post).
- get read/write access to each outbound message from a list before it is mailed to subscribers. This is where the list-signing + encryption would be done.
- provide a list command (even one that overrides the normal Mailman list commands) so one could do things with public keys relevant to this plugin -- subscribe to a list and supply either a pointer to or a copy of a public key at the same time in the same subscribe command, for example. I imagine there would be other commands that need similar public key additions to their syntax.
- be able to add fields, table columns, and form elements to an extensible web UI so key-related functions and displays can be integrated into extant displays -- subscribing on the web should include a prompt where one uploads one's public key or provides a URL where Mailman can fetch said public key. Getting a list of subscribers (if that's allowed per list policy) should perhaps include the key fingerprint of each subscriber (if that subscriber is okay with publishing that info); the subscriber info for each list should include a setting where each user can decide if they want the list to publish the key fingerprint (I think this is akin to current Mailman asking about hiding the user's identity in the subscriber list), etc.
- provide a way to place the list's public key in a file everyone could read at a predictable static URL. MUAs would use this URL to automatically fetch the list's public key.
- provide a way to add to the documentation describing the plugin's added functionality; best would be to not replace extant documentation pages, but to add pointers in current documentation pointing to new pages (which are installed alongside extant docs).
The plugin might also need an acceptable place on the server to store state data: a GPG keyring holding a copy of each subscriber & moderator's public key, the list's public and private keys, and some temporary space where the list can do work signing+encrypting messages (or perhaps GPG has a better way to handle this and the plugin should use whatever GPG provides).
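A hypothetical skeleton tying the inbound/outbound needs above together. The hook names, return values, and key handling here are inventions for illustration -- none of this is Mailman's or GPG's real API:

```python
from email.message import EmailMessage

class EncryptedListPlugin:
    """Hypothetical plugin shape; hook names are NOT real Mailman API."""

    def __init__(self, subscriber_fingerprints: set[str], list_fingerprint: str):
        self.subscriber_fingerprints = subscriber_fingerprints
        self.list_fingerprint = list_fingerprint

    def on_inbound(self, msg: EmailMessage, signer_fingerprint: str) -> str:
        """Signal accept/hold to the next stage of the processing queue,
        based on the signing key rather than the From address."""
        if signer_fingerprint in self.subscriber_fingerprints:
            return "accept"
        return "hold"  # let list policy decide about non-subscriber keys

    def on_outbound(self, msg: EmailMessage) -> EmailMessage:
        """Where list-signing + encryption would happen; here we only
        tag the message with the (hypothetical) list key fingerprint."""
        msg["X-List-Key-Fingerprint"] = self.list_fingerprint
        return msg
```

The real crypto (verify, decrypt, sign+encrypt) would sit behind these two hooks, with the keyring state described above living wherever the plugin is given storage.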
I'm not familiar with Mailman's current hooks/API to know if any of this already exists, or is in-line with how Mailman generally does things.
Functionally, the list needs to inform all subscribers of the new list public key each time it changes. Maybe that is done with a header in each post carrying the list's public key's fingerprint (saying the key with this fingerprint was used to sign+encrypt this message), and it would be up to the MUA to get (and cache) the appropriate public key to verify the post?
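On the MUA side, that get-and-cache step could look like the following sketch. The header name and the fetch callback are hypothetical; a real MUA would fetch the key from the list's published static URL and then verify the post's signature with it:

```python
from email.message import EmailMessage

_key_cache: dict[str, str] = {}

def list_public_key(msg: EmailMessage, fetch) -> str:
    """Return the list public key named by the message's key header,
    calling fetch(fingerprint) only on a cache miss."""
    fingerprint = msg["X-List-Key-Fingerprint"]  # hypothetical header
    if fingerprint not in _key_cache:
        _key_cache[fingerprint] = fetch(fingerprint)
    return _key_cache[fingerprint]
```

A key change then shows up as a new fingerprint in the header, triggering exactly one fetch of the new key.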
There are also some policy issues I raised in another post on this thread regarding GPG's support for obsolete hash/encryption algorithms and regarding how this project wants to handle list archives (particularly re-signing+encrypting list archives). I think those issues should be considered; they seem to me to be important factors in how everyone interested in the list would use the list.
Finally, I'm not sure what the goals for the project were in the first place and the goals I posted are really only things I think are possible to implement.
Here's another of my guesses aimed at succinctly describing what this project will do: replace the subscription and posting filtering mechanism (currently based on email address string comparisons) with public key cryptography; if one sends in a post from any email address signed (or signed+encrypted, as per list policy?) with the right public key, that post is deemed to come from a subscriber. Other posts are handled in accordance with list policy.
If the project leader could speak to the project's goals, it might help us all understand what's in and out of scope for the project.
Also, thanks for letting someone completely unconnected with this project chime in out of nowhere.
On Thu, Mar 16, 2017 at 08:10:03PM +0100, Norbert Bollow wrote:
Even if not every device is secure, the difficulty, and likely cost, for an attacker to snoop on the communications is much greater for an encrypted mailing list is than for a non-encrypted one.
The difficulty is greater -- but not by much. Attackers have long since become extremely proficient at installing keystroke loggers and extracting credentials in order to compromise many other forms of communication. It's only an incremental, low-cost step for them to extend those techniques to encrypted mailing lists.
Now I'll grant that this is unlikely to happen immediately (except for intelligence agencies, who will be ready for this before it's deployed in the field). But one of the things that we've seen over and over again is that once attackers decide that a particular target (or kind of target) has value, they'll focus on it with surprising rapidity.
---rsk
On Thu, Mar 16, 2017 at 05:30:36PM -0400, Barry Warsaw wrote:
On Mar 15, 2017, at 09:47 PM, Rich Kulawiec wrote:
What all of this means is that once a list passes N members, where we can debate about N, the probability that at least one of those members has already been compromised even before they've joined the list starts rapidly increasing.
That assumes an open membership policy. Wouldn't much of this be mitigated with a closed subscription policy?
It *might* be.
The problem is that the list owner and other list members have no way to know. From their point of view, there is no way to know whether the latest list member -- whether that's list member #8 or #7,221 -- is using a reasonably secure mail client on a reasonably secure operating system in a reasonably secure environment, or whether they're reading list traffic on an iPhone that was fully compromised eight months ago. Moreover, even if that newest list member is doing the former today, nothing prevents them from doing the latter tomorrow.
(Yes, one could ask them not to, even make not doing so a condition of membership. That won't work. Somebody is going to read email on their fridge or their car or their Android phone -- because they can, because they're lazy, because it's convenient, because they feel like it.)
It's thus impossible to (a) estimate the risk or (b) control the risk or (c) know when a full compromise has taken place, absent outside indicators.
That's a really bad combination to have in anything that's trying to be secure.
Yet there still may be value in encrypting the communication channels into and out of Mailman, even if that can be compromised at the end-points.
I agree.
I can sadly report that the problem is getting worse and will continue to get worse, because (a) all of the various factors contributing to it are also getting worse and (b) there are no reasons for anyone to significantly invest in making it better.
(b) is not necessarily true. There is lots of work going on to provide secure base platforms on which to implement IoT devices.
I'm aware of at least some of that, and I'd like to hope for the best.
But economic incentives being what they are, there is little motivation for vendors to bother. Moreover, many vendors are deliberately compromising end-user privacy and security (e.g., Vizio) because it's profitable to do so and the penalties, if any, are a mere slap-on-the-wrist. (I know you see a lot of this because of what you do; other folks might want to browse through TechDirt's ongoing partial catalog of IoT failures.)
My view -- at the moment, ask again tomorrow ;) -- is that so many IoT devices have been rushed to market with no consideration for security and privacy issues that the present situation is untenable. The best thing would be to recall *all* of them: all the smartphones, all the watches, all the TVs, everything...and start over. That's of course ludicrous and won't happen. Which means all those devices will persist in the field, joined by new ones in large numbers every day. And the slow backfill of fixes -- which *might*, in a vacuum, actually suffice -- isn't going to be enough, because so much of the rest of the IoT ecosystem is a mess.
In a relatively short time we've taken a system built to resist
destruction by nuclear weapons and made it vulnerable to toasters.
--- Jeff Jarmoc
---rsk
On 3/18/17 4:37 PM, Rich Kulawiec wrote:
On Thu, Mar 16, 2017 at 05:30:36PM -0400, Barry Warsaw wrote: ... It *might* be.
The problem is that the list owner and other list members have no way to know. From their point of view, there is no way to know whether the latest list member -- whether that's list member #8 or #7,221 -- is using a reasonably secure mail client on a reasonably secure operating system in a reasonably secure environment, or whether they're reading list traffic on an iPhone that was fully compromised eight months ago. Moreover, even if that newest list member is doing the former today, nothing prevents them from doing the latter tomorrow.
(Yes, one could ask them not to, even make not doing so a condition of membership. That won't work. Somebody is going to read email on their fridge or their car or their Android phone -- because they can, because they're lazy, because it's convenient, because they feel like it.)
It's thus impossible to (a) estimate the risk or (b) control the risk or (c) know when a full compromise has taken place, absent outside indicators.
That's a really bad combination to have in anything that's trying to be secure.
Barry, I would say that the problem being attempted here is fundamentally impossible to solve perfectly. It is impossible to distribute messages in a secure manner to a number of recipients when you don't have total control over their environment and cannot KNOW that security is being maintained. Communication always has that sort of issue: if you tell someone something private, you need to be able to trust that they will keep it private, and there is always a risk that they will reveal the information intentionally or accidentally.
The question becomes: is it better to provide a method that gets you part way to the goal, at the risk of a false sense of security, or to not provide any method at all?
This is comparable to the fact that we lock our homes and cars to keep them 'secure', even though we know that security isn't perfect. Doing so reduces the attack surface, but it is sometimes hard to estimate by how much.
Yes, if such a feature was added, adding a notice to remind people that the security provided is only as good as the weakest link among all the members of the list would make sense.
-- Richard Damon
On Sat, 18 Mar 2017 13:54:05 -0400 Rich Kulawiec <rsk@gsp.org> wrote:
On Thu, Mar 16, 2017 at 08:10:03PM +0100, Norbert Bollow wrote:
Even if not every device is secure, the difficulty, and likely cost, for an attacker to snoop on the communications is much greater for an encrypted mailing list than for a non-encrypted one.
The difficulty is greater -- but not by much. Attackers have long since become extremely proficient at installing keystroke loggers and extracting credentials in order to compromise many other forms of communication. It's only an incremental, low-cost step for them to extend those techniques to encrypted mailing lists.
Now I'll grant that this is unlikely to happen immediately (except for intelligence agencies, who will be ready for this before it's deployed in the field). But one of the things that we've seen over and over again is that once attackers decide that a particular target (or kind of target) has value, they'll focus on it with surprising rapidity.
That is true if the attacker already knows whose communications they want to snoop on. However, one of the main benefits of using encrypted communications is in making it much more expensive and politically risky for the attacker to determine which targets have value.
In the absence of encryption, that can be achieved by means of mass surveillance anywhere between the communications endpoints followed by (possibly AI-based) pattern analysis, at near-zero incremental cost and near-zero incremental risk per additional group that is subjected to such surveillance for reasons of its communications being possibly of interest to the attacker.
Greetings, Norbert
Rich Kulawiec writes:
What all of this means is that once a list passes N members, where we can debate about N, the probability that at least one of those members has already been compromised even before they've joined the list starts rapidly increasing.
This is true, but you've omitted (well, hidden in the gloss about "correct usage") the most important source of compromise: the subscribers themselves.
The most important use case I have in mind is not actually encrypted lists per se, but rather anonymized and encrypted lists. If done properly (I have not even attempted the analysis yet, and may not be competent to conduct it), it may be useful against garden-variety stalkers, requiring them to access server logs or the particular user's client (in which case they're presumably already done for) to de-anonymize.
Of course, in most cases a competent (I'm not even going so far as "advanced") and persistent attacker will be able to compromise a server as well (thus "garden-variety" = script kiddie).
If nothing else, I hope to educate a half-dozen GSoC applicants that "encryption is at best a 10% solution, and more likely 7%". :-/
And then Rich Kulawiec writes:
In particular, note that entities like Whisper and Signal have been, as I've said for years, peddling snake-oil. They cannot possibly deliver on their promises *even if they do everything they say they can do* because all of it is immediately and completely undercut if the underlying system is compromised.
Compromise of the underlying systems still typically requires cooperation from the user. Agreed, most people who casually think, "oh, an encrypted list! that's useful", are already busted. "Caveat: that word ('encrypted') doesn't mean what you think it means" should be warning enough (for our CYA, anyway).
This is about building a system that is known 0% secure from the start.
Is that 0% Kelvin or 0% Celsius? ;-)
I think, in the end, this will serve the community poorly -- because people who don't grasp the contemporary security landscape will deploy it, will rely on it, and will not understand that they lost the game before they even started to play it. This will have consequences.
People are already affronted that pretty much anybody who wants to can read their email, but that's the fact and it has consequences. I'm not sure we need to take responsibility for that. I'm willing to hear more about that, but not on the basis of strawmen like "0% secure".
As Zeynep Tufekci has been at pains to point out since the Guardian WhatsApp fiasco, that doesn't mean we shouldn't provide tools imposing additional effort on the bad guys for use by those who do understand the risks in this environment.
If you want to point out what use cases are broken from the word "go", even if that includes my preferred applications, fine. But I think it's reasonable to expect that a number of user groups capable of taking advantage do exist.
And he persists:
Moreover, none of this comes for free: there is opportunity cost, complexity cost, maintenance cost, interoperability cost, etc.
It's nearly free, because there are a lot of GSoC wannabes out there who think this is "way cool". Disabusing them of that notion may be the most important contribution of this project.
In my view, it's not worth incurring all these costs to implement something that we already know, today, right now, is not going to work in the contemporary Internet environment -- because it relies on underlying assumptions about endpoint security that almost certainly won't be true as soon as the deployment scale reaches modest numbers.
I thought that it already wasn't true even before we thought up this project, let alone when deployment can be expected to reach "modest scale"? Rich, calm down -- inconsistency is unbecoming of a security professional. ;-)
Note my reply to Barry: it's not unreasonable to expect that the odds turn against you as soon as you *pass three subscribers*. So, yes, we are going to have to document that to the users, and tell them they are going to have to make serious effort to turn the odds in their favor for *any* use case they may have in mind.
I hope it doesn't surprise anybody that despite being a proponent of this project I'm quite sympathetic to Rich.
Barry Warsaw writes:
That assumes an open membership policy. Wouldn't much of this be mitigated with a closed subscription policy?
Not if the target membership isn't already paranoid. Remember, 20%-40% of devices are already compromised. Even at the low end, assuming uniform draws, with *three* members odds are *even* that one is compromised. Sure, your assumption is non-uniform, but it's not clear it's more optimistic -- suppose feeling paranoid enough to consider an encrypted list means the probability they're out to get you is *higher* than uniform?
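The "even odds at three members" arithmetic can be checked directly; the sketch below just evaluates 1 - (1 - p)^n for the figures quoted above (the function name is illustrative):

```python
def p_any_compromised(p_device: float, n_members: int) -> float:
    """Probability that at least one of n independent member devices is
    compromised, given a per-device compromise probability."""
    return 1.0 - (1.0 - p_device) ** n_members

# The 20% low end of the range quoted above, with three members:
print(round(p_any_compromised(0.20, 3), 3))  # 0.488 -- roughly even odds
# The 40% high end:
print(round(p_any_compromised(0.40, 3), 3))  # 0.784
```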
And do you really think the proportion of truly tight-lipped potential subscribers is better than 80%?
I'm not saying there's nothing useful here, but there's no longer any such thing as "paranoia" when it comes to IoT (where "thing" includes anything connected, not just embedded devices).
I agree that the security of an encrypted remailer such as we're discussing is only as secure as its recipients. Yet there still may be value in encrypting the communication channels into and out of Mailman, even if that can be compromised at the end-points.
Unless you're talking about a resistance cell in a society that has been authoritarian for a few decades, I think we should assume that content is freely available to anybody who really wants it. It's not just John Podesta "who should know better", I've seen testimony recently from a security professional saying they'd clicked on a spearphish. They were in an isolated environment and they're pretty sure no harm was done, but they did click unintentionally. Jus' plain folks have no chance.
As I've said elsewhere, the only use case I'm seriously considering is encrypted + anonymized, so that you need to compromise (or subpoena) the server (or the exact sender) to identify senders of particular content. People smarter than me might be able to extend that area of applicability.
(b) is not necessarily true. There is lots of work going on to provide secure base platforms on which to implement IoT devices.
There's also active avoidance of the whole concept of security by major device (vs. platform) vendors. C'mon, guys, open telnet port on a router? Plus the reality that many devices produced by Chinese companies are almost certainly backdoored. It will be many years, maybe decades, before IoT means anything but "Internet of Threats".
I still think this is worth doing, both for the occasional use case, and for many of the reasons you give, but the applications are far more restricted than the GSoC applicants seem to think. :-/
Steve
On Sun, Mar 19, 2017 at 07:33:24AM -0400, Richard Damon wrote:
I would say that the problem that is being attempted to solve is fundamentally impossible to do perfectly. It is impossible to distribute messages in a secure manner to a number of recipients whose environments you don't totally control and KNOW that security is being maintained. Communication always has that sort of issue: if you tell someone something private, you need to be able to trust that they will keep it private, and there is always a risk that they will reveal the information intentionally or accidentally.
[snip]
I think this (and the rest, which I've elided for brevity) is a very good statement of the problem.
I'll just add that -- in the general case, and quoting from the above, we already KNOW that security is *not* being maintained. It's not an open question, it's been answered very clearly for well over a decade.
(In the specific case, e.g., the right people using the right devices with the right knowledge and self-discipline: maybe. But there are not many of those cases and any of them can revert to the general case in seconds with one poor decision or perhaps even without one.)
---rsk
On 3/21/17 6:30 PM, Rich Kulawiec wrote:
On Sun, Mar 19, 2017 at 07:33:24AM -0400, Richard Damon wrote:
[snip]
The only way to keep a secret is not to tell it, as once you have told it, there is no way to keep the person you have told it from repeating it (intentionally, accidentally, or unknowingly). There are times (many of them) where it still makes sense to tell the secret and do your best to keep security.
It is similar to the fact that I know my house is not totally burglar proof. A determined person will be able to break into my home to take/place things, and if they were very determined, maybe even do so undetected. This doesn't mean I give up on security; I still lock my door, because it makes me more secure than otherwise.
In the same way, an encrypted mailing list is not perfect, but it is a help for the transmission of sensitive information that I wish to keep secret. It makes the transmission phase much more secure, and maybe helps a tiny bit with keeping the data at the end points secure. It should be known, and prominently displayed in the documentation, that encrypted transmission doesn't help significantly with security at the end points, and that you need to evaluate your trust in the recipients to keep the information secure.
One big thing that I haven't seen in the discussion of this problem is exactly WHAT issue/problem this feature is intended to solve. There are several different problems that encryption can help with, each needing a different sort of support from the software.
-- Richard Damon
Richard Damon writes:
One big thing that I haven't seen in the discussion of this problem is exactly WHAT issue/problem this feature is intended to solve. There are several different problems that encryption can help with, each needing a different sort of support from the software.
Yup, and I've been telling the prospective interns that throughout.
But Rich Kulawiec is right that many or most of them can be eliminated right off. For example, I don't see any point in actual end-to-end encryption, as that would require everybody to know everybody's keys. OK, so we could create a PKI for each list, but that's effort over and above the encryption module, probably not appropriate for this GSoC. (It's been mentioned that algorithms are not forever, but similarly I think that's out of scope.) AFAICS, this means that root on the Mailman host is trusted, and needs to know the session key for each message. Perhaps you can avoid having to trust list owners, but when does that scenario actually make sense?
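The "root knows the session key for each message" model described above is essentially hybrid encryption: one fresh session key per post, wrapped separately for each subscriber. A minimal stdlib-only sketch, with a toy XOR/SHA-256 construction standing in for a real AEAD cipher and real public-key wrapping (all names are hypothetical, and this must not be used for actual security):

```python
import hashlib
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256-derived keystream. Applying
    # it twice with the same key inverts it. A real list would use an
    # AEAD cipher (e.g. AES-GCM) and OpenPGP-style key wrapping instead.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_post(body: bytes, subscriber_keys: dict) -> dict:
    """One fresh session key per message; the session key is wrapped
    separately for every subscriber, so the Mailman host sees it too."""
    session_key = secrets.token_bytes(32)
    return {
        "ciphertext": xor_cipher(body, session_key),
        "wrapped": {addr: xor_cipher(session_key, key)
                    for addr, key in subscriber_keys.items()},
    }

def decrypt_post(message: dict, addr: str, key: bytes) -> bytes:
    session_key = xor_cipher(message["wrapped"][addr], key)
    return xor_cipher(message["ciphertext"], session_key)

subscribers = {"alice@example.org": secrets.token_bytes(32),
               "bob@example.org": secrets.token_bytes(32)}
msg = encrypt_post(b"agenda for Friday", subscribers)
assert decrypt_post(msg, "bob@example.org",
                    subscribers["bob@example.org"]) == b"agenda for Friday"
```

The design point is the shape, not the crypto: adding a subscriber adds one wrapped key per message, and the list host necessarily handles the session key, which is exactly the trust question raised above.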
Note that GSoC is Google Summer of CODE - the reason for being cagey about what I'm thinking about as specs and use cases is not that the intern will be responsible for design. It's that the intern needs to understand that design and the use cases it serves in order to determine whether the implementation is correct, write tests, and so on, and I prefer to mentor Socratically. That doesn't mean anybody else needs to be coy! Feel free to put your ideas about use cases out there.
Also references to existing knowledge would be appreciated, such as "zero knowledge" schemes that might allow untrusted root on Mailman host, and the various implementations like SELS that have been mentioned.
Steve
Rich Kulawiec writes:
(In the specific case, e.g., the right people using the right devices with the right knowledge and self-discipline: maybe. But there are not many of those cases and any of them can revert to the general case in seconds with one poor decision or perhaps even without one.)
I'm with Richard Damon on this.
FYI: Encrypted lists *are* occasionally requested. Even if we are forced to give up, we need to investigate this, and convince ourselves that there really are NO valid use cases so we can make the case that it's a bad idea to those users. I note that several other projects have created variations on encrypted lists. It's reasonable for us to want to learn what they are and are not good for in order to converse with users about their requests for encrypted lists.
You have my permission to say "I told you so" if we're forced to abandon this as a silly idea. Until then, I think you're wasting bandwidth in opposing it from the get-go. Once again, I'd be happy to hear where our threat models are deficient once we start to talk about them. But none of the proposals so far have really identified a threat model let alone a corresponding use case! So there's nothing to criticize yet.
Regards, Steve
On 03/22/2017 04:06 PM, Stephen J. Turnbull wrote:
Rich Kulawiec writes:
(In the specific case, e.g., the right people using the right devices with the right knowledge and self-discipline: maybe. But there are not many of those cases and any of them can revert to the general case in seconds with one poor decision or perhaps even without one.)
I'm with Richard Damon on this.
FYI: Encrypted lists *are* occasionally requested. Even if we are forced to give up, we need to investigate this, and convince ourselves that there really are NO valid use cases so we can make the case that it's a bad idea to those users. I note that several other projects have created variations on encrypted lists. It's reasonable for us to want to learn what they are and are not good for in order to converse with users about their requests for encrypted lists.
You have my permission to say "I told you so" if we're forced to abandon this as a silly idea. Until then, I think you're wasting bandwidth in opposing it from the get-go. Once again, I'd be happy to hear where our threat models are deficient once we start to talk about them. But none of the proposals so far have really identified a threat model let alone a corresponding use case! So there's nothing to criticize yet.
A use case I have in mind is for mailing lists that:
a) have a relatively low number of subscribers;
b) have or can establish some sort of a PGP web of trust, in the sense that a subscriber has to trust the list owner's key or the list key, and the list owner has to trust the subscriber's key when accepting the subscription (this is due to fairly strong assumptions about the attacker's abilities);
c) can be anonymous: apart from the obvious information stemming from b), no more information about a subscriber's or sender's identity has to be disclosed to other subscribers than is now with anonymous lists. (See Technical details in my proposal.)
This is what my proposal is aiming for and what I think is a realistic application of encrypted mailing lists.
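The trust gate in point b) could be sketched as below; this is a toy stand-in that checks bare fingerprints, where a real implementation would verify OpenPGP signatures and ownertrust, and all names are illustrative:

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    # Toy fingerprint: truncated SHA-256 of the key material.
    return hashlib.sha256(pubkey).hexdigest()[:16]

def accept_subscription(pubkey: bytes, trusted_fingerprints: set) -> bool:
    """Accept a subscription only if the offered key was verified out of
    band by the list owner -- the web-of-trust requirement in b)."""
    return fingerprint(pubkey) in trusted_fingerprints

alice_key = b"-----TOY KEY ALICE-----"
trusted = {fingerprint(alice_key)}
assert accept_subscription(alice_key, trusted)
assert not accept_subscription(b"-----TOY KEY MALLORY-----", trusted)
```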
-Jan
On 03/22/2017 04:02 PM, Stephen J. Turnbull wrote:
Also references to existing knowledge would be appreciated, such as "zero knowledge" schemes that might allow untrusted root on Mailman host, and the various implementations like SELS that have been mentioned.
In my proposal [1 or 2], I concluded that a SELS-like proxy encryption scheme doesn't apply well to Mailman's existing infrastructure, nor to the stated requirements in the project idea.
-Jan
(Just to note, I have recently updated my proposal; make sure you have the latest version.) [1]: https://neuromancer.sk/page/gsoc/mailman#technical-details [2]: https://neuromancer.sk/static/mailman.pdf
On Mar 21, 2017, at 07:27 PM, Stephen J. Turnbull wrote:
Not if the target membership isn't already paranoid. Remember, 20%-40% of devices are already compromised. Even at the low end, assuming uniform draws, with *three* members odds are *even* that one is compromised.
Is anybody even aware of any mainstream mobile email readers that support encryption? Or webmail interfaces? I seem to remember a recent announcement that Gmail will soon support plugins that could be used to read and send GPG/PGP encrypted messages.
An encrypted mailing list won't help you much regardless of the compromised nature of your device if you can't even read the encrypted messages. ;)
-Barry
On Mar 23, 2017, at 12:06 AM, Stephen J. Turnbull wrote:
FYI: Encrypted lists *are* occasionally requested.
Another possible use case would be attempting to prevent the wholesale compromise of email storage. Meaning, if you keep your email on some external server, and that server is compromised, if those messages are encrypted, then at least they likely will be very difficult for the attacker to decrypt since the keys won't likely be colocated with the emails. Sure you can probably phish specific individuals, but it won't be "crack the server and now you have a million secret messages". It's the same as with encrypted person-to-person messages (which almost no one uses because Reasons).
You have my permission to say "I told you so" if we're forced to abandon this as a silly idea. Until then, I think you're wasting bandwidth in opposing it from the get-go. Once again, I'd be happy to hear where our threat models are deficient once we start to talk about them. But none of the proposals so far have really identified a threat model let alone a corresponding use case! So there's nothing to criticize yet.
I should state for the record that my personal interest in this feature isn't so much encrypted mailing lists per se, but the architectural and design pressure it will put on Mailman 3, and our responses to that. Encrypted lists are the kinds of things I want to make possible with Mailman 3, so the APIs, hooks, configurations, and plugins that would be needed to implement encrypted lists (assuming, IMHO correctly, that they won't be integrated into the core) will be of use to others who want to do Interesting Things with mailing lists.
Cheers, -Barry
On Wed, 22 Mar 2017 21:15:46 -0400 Barry Warsaw <barry@list.org> wrote:
Is anybody even aware of any mainstream mobile email readers that support encryption?
One of my friends uses K9 on his Samsung mobile phone; it works fine for him, allowing him to exchange GPG-encrypted emails with me.
If you need more examples, I could ask on the (encrypted, Schleuder-based) internal mailing list of Digital Society Switzerland.
An encrypted mailing list won't help you much regardless of the compromised nature of your device if you can't even read the encrypted messages. ;)
Personally I prefer to use my mobile phone for plain old telephony and SMS only; I lovingly refer to it as "my dumbphone". I believe that the risk of compromise is much, much lower than for Internet-connected smartphones, even without taking any special security precautions in relation to the dumbphone.
By contrast, a laptop running Debian GNU/Linux can be kept secure enough for my needs without too much trouble even while communicating via the Internet with somewhat-trusted as well as with totally untrusted third parties.
That is good enough for me, because I don't have a need to be online continually while awake; it suffices to be able to live up to a promised 24-hour turn-around time for anything that may be relatively urgent. A laptop computer is sufficiently mobile for ensuring that, as long as I don't travel to really remote areas.
Greetings, Norbert
On 2017-03-22 6:27 PM, Barry Warsaw wrote:
I should state for the record that my personal interest in this feature isn't so much encrypted mailing lists per se, but the architectural and design pressure it will put on Mailman 3, and our responses to that. Encrypted lists are the kinds of things I want to make possible with Mailman 3, so the APIs, hooks, configurations, and plugins that would be needed to implement encrypted lists (assuming, IMHO correctly, that they won't be integrated into the core) will be of use to others who want to do Interesting Things with mailing lists.
I just want to pull this out and make sure students have seen it, because I know a lot of folk will see a discussion like the one we've had on the challenges and assume that "this is hard to do" means "this project is a waste of time and I should find another org to work with." And that's not true at all!
As Barry says, this is an interesting project for Mailman for many reasons that have nothing to do with encryption and everything to do with how to build a moderately complex system that hooks into Mailman. Those reasons are still valid regardless of how you feel about encryption. :)
More than that, GSoC's goal is generally *not* that students produce perfect workable code (you can sort of tell this by the fact that the code doesn't even have to be used by the organization providing mentors!) but rather to get students experience working in open source communities, learning new architectures, and working on real-world problems. Again, even if everyone everywhere is compromised, this is an interesting enough problem that meets all those other needs. When Stephen floated this idea, I thought it was great because on top of learning about Mailman, it gives students a chance to work on a challenging security problem as well. Getting encryption even partially right is something that developers struggle with (my day job involves helping open source dev teams understand security), and a little experience tends to go a long way when it comes to future understanding of threat models and other key concepts in defining security.
It's also worth noting that one of the reasons this was chosen was that this *isn't* an urgently-needed release-blocking feature for Mailman, but rather a "nice to have" that someone can work on without quite as much pressure. Again, this is great for a student project in ways that it might not be ideal for a core developer.
Basically, don't just read "Why Johnny Can't Encrypt" [1] and assume the problem of encrypted email is dead and will never be solved. As my PhD supervisor used to say: "you should look at impossible, insolvable problems as research opportunities rather than dead ends. That's science." :)
[1] https://www.usenix.org/conference/8th-usenix-security-symposium/why-johnny-c...
Terri Oda writes:
Basically, don't just read "Why Johnny Can't Encrypt" [1] and assume the problem of encrypted email is dead and will never be solved.
But you might also want to read JWZ's blog on Signal[2] *and all the comments* to see why threat models matter, and how subtle it can be. (If you're not going to read a large fraction of the comments, don't bother, nothing to see here.) It's the disagreement among smart, well-intentioned -- if a bit mouthy in JWZ's case ;-) -- people that's of interest here. AFAICT, in the whole thread there are no two individuals who agree on what threat model this particular encrypted messaging system should try to address!
[1] https://www.usenix.org/conference/8th-usenix-security-symposium/why-johnny-c...
[2] https://www.jwz.org/blog/2017/03/signal-leaks-your-phone-number-to-everyone-...
On Sun, Mar 19, 2017 at 06:14:22PM +0100, Norbert Bollow wrote:
That is true, if the attacker already knows whose communications they want to snoop on. However one of the main benefits of using encrypted communications is in the area of making it much more expensive and politically risky for the attacker to determine which targets have value.
The attacker (for many values of "attacker") is and will be particularly interested in communications that are encrypted -- because they'll stand out. Granted, this will diminish as more communications become encrypted, but for the foreseeable future, anyone using encryption or similar privacy measures will be targeted:
https://www.wired.com/2014/07/nsa-targets-users-of-privacy-services/
I agree with you that encryption makes it more expensive, and that's an argument for deploying it, but I don't agree that it's politically risky: there are no appreciable consequences for anyone engaging in this. Even at the commercial level (e.g., Verizon's insertion of unblockable cookies in order to conduct surveillance) there are no appreciable consequences for any violation of user privacy or security -- merely inconsequential slap-on-the-wrist fines and then it's right back to business as usual.
In the absence of encryption, that can be achieved by means of mass surveillance anywhere between the communications endpoints followed by (possibly AI-based) pattern analysis, at near-zero incremental cost and near-zero incremental risk per additional group that is subjected to such surveillance for reasons of its communications being possibly of interest to the attacker.
I almost entirely agree with you on this, but want to point out that if an attacker has compromised an endpoint, they can stop there: there's no need to worry about the rest. And endpoints are already compromised by the hundreds of millions, with more every day. (And as more endpoints become part of the IOT, the rate of compromise will increase drastically.) I think it's quite reasonable to extrapolate a billion compromised endpoints sometime in the next couple of years. (I also think that in a couple of years I'll shake my head at how much of an underestimate that turned out to be.)
So if it becomes desirable or profitable for the new owners of those systems to pay specific attention to encrypted mailing list traffic, they will...and probably much quicker than anyone anticipates. They won't get it right the first or second time, just like they didn't get botnet C&C organization right the first or second time -- but it won't take them long to learn.
Thus the target end user population for encrypted mailing lists looks something like this:
Nobody using freemail providers -- these fall into two categories: those that are owned and those that are going to be owned.
Nobody using webmail -- webmail implementations have a long and sad history of serious security issues. And "browser security" is often an oxymoron.
Nobody using Windows, MacOS, Android, or iOS. There are already too many exploits on the table to keep track of, and there can be no doubt that these are only a fraction of the total: many more are held by security researchers, vulnerability brokers, intelligence agencies, etc. And Linux probably should be added to that list in the near future, as its increasing deployment has clearly made it an attractive target. (Nod to the past week's releases by the Shadow Brokers, which are surely the tip of the tip of the iceberg.)
Nobody with poor email habits, e.g., top-posters, full-quoters, people who use HTML markup. (Since these undercut encryption, sometimes rather badly.)
Nobody using the IOT to send or receive email, e.g., their car, which was very likely pre-compromised at the factory.
That doesn't leave a lot of people.
I'm not saying "don't do it". As an intellectual exercise and a development challenge, it's interesting. I'm saying "make sure -- if people are thinking about deploying this -- that they understand that they have almost no chance of making this work as intended in the real world."
---rsk
On Mon, 17 Apr 2017 19:22:52 -0400 Rich Kulawiec <rsk@gsp.org> wrote:
On Sun, Mar 19, 2017 at 06:14:22PM +0100, Norbert Bollow wrote:
That is true, if the attacker already knows whose communications they want to snoop on. However one of the main benefits of using encrypted communications is in the area of making it much more expensive and politically risky for the attacker to determine which targets have value.
The attacker (for many values of "attacker") is and will be particularly interested in communications that are encrypted -- because they'll stand out. Granted, this will diminish as more communications become encrypted, but for the foreseeable future, anyone using encryption or similar privacy measures will be targeted:
https://www.wired.com/2014/07/nsa-targets-users-of-privacy-services/
The NSA scans just about all unencrypted email communications anyway.
So not encrypting communications is certainly not a viable strategy for ordinary (i.e. non-criminal) people who would like their emails not to be scanned by the NSA. Even if the NSA were to make the greatest possible effort to also scan as many encrypted communications as they can, and were to achieve 100% success, that would in the worst case only bring the level of their privacy violations for encrypted communications up to the level at which they already violate privacy for unencrypted communications.
Another important point is that not all attackers have capabilities of attacking encrypted communications. An important class of attackers is technically relatively unsophisticated criminals going after relatively soft targets of opportunity.
Nota bene, I'm only talking about the communications of non-criminals here. I'm not interested in discussing whether it might be a viable strategy for terrorists or other criminals to intentionally not technically encrypt their communications, in order to attempt to make those communications not stand out among the mass of unencrypted communications among innocents.
I agree with you that encryption makes it more expensive, and that's an argument for deploying it, but I don't agree that it's politically risky: there are no appreciable consequences for anyone engaging in this.
I can assure you that "Digital Society Switzerland", a Swiss NGO where I happen to be serving as president, would be most delighted to have concrete evidence of even a single concrete example of a foreign intelligence service having broken into an innocent person's computer or other communication device in Switzerland for purposes of spying on encrypted communications. There are multiple ways in which we would be most eager to exploit this politically, with reputational side effects on the guilty state actor that they would certainly prefer to avoid.
Now if the foreign intelligence services deploys their intrusion capability only against terrorists and their close associates, we (Digital Society Switzerland) are not likely to get any evidence of that, and even if we got evidence of such activities, that would not help us politically.
But if it should happen that they start mass surveillance of end-to-end encrypted email communications, that would include our internal communications, so the foreign intelligence service would need to compromise a significant number of the devices that we use for communicating, and chances are that one of us would notice that something is wrong and get the issue addressed in a professional manner that involves forensic analysis.
Even in the case of a foreign state actor that does not care about any diplomatic repercussions, or a foreign state actor that likes to be intentionally provocative, there would be a heavy cost to them if they were to make widespread attacks and these attacks were made widely known, because in such a case the security vulnerabilities that they exploit would become well-publicized, and many of the more interesting surveillance targets would secure their devices against those attacks.
Even at the commercial level (e.g., Verizon's insertion of unblockable cookies in order to conduct surveillance) there are no appreciable consequences for any violation of user privacy or security -- merely inconsequential slap-on-the-wrist fines and then it's right back to business as usual.
Unblockable cookies are quite different technically as well as emotionally/politically from the kinds of attacks that we're discussing here.
In the absence of encryption, that can be achieved by means of mass surveillance anywhere between the communications endpoints followed by (possibly AI-based) pattern analysis, at near-zero incremental cost and near-zero incremental risk per additional group that is subjected to such surveillance for reasons of its communications being possibly of interest to the attacker.
I almost entirely agree with you on this, but want to point out that if an attacker has compromised an endpoint, they can stop there: there's no need to worry about the rest. And endpoints are already compromised by the hundreds of millions, with more every day. (And as more endpoints become part of the IOT, the rate of compromise will increase drastically.) I think it's quite reasonable to extrapolate a billion compromised endpoints sometime in the next couple of years. (I also think that in a couple of years I'll shake my head at how much of an underestimate that turned out to be.)
All of that is true, although of course even when an endpoint is compromised by one attacker, it may still be inaccessible to other adversaries (e.g. because some of the other adversaries will be less sophisticated, or because the first attacker's rootkit closes the security hole through which they came in, or because the second attacker's rootkit fails to work because it assumes an unmodified system and that assumption is wrong because of the presence of the first attacker's rootkit).
So if it becomes desirable or profitable for the new owners of those systems to pay specific attention to encrypted mailing list traffic, they will...and probably much quicker than anyone anticipates. They won't get it right the first or second time, just like they didn't get botnet C&C organization right the first or second time -- but it won't take them long to learn.
Thus the target end user population for encrypted mailing lists looks something like this:
Nobody using freemail providers -- these fall into two categories: those that are owned and those that are going to be owned.
Nobody using webmail -- webmail implementations have a long and sad history of serious security issues. And "browser security" is often an oxymoron.
Nobody using Windows, MacOS, Android, or iOS. There are already too many exploits on the table to keep track of, and there can be no doubt that these are only a fraction of the total: many more are held by security researchers, vulnerability brokers, intelligence agencies, etc. And Linux probably should be added to that list in the near future, as its increasing deployment has clearly made it an attractive target. (Nod to the past week's releases by the Shadow Brokers, which are surely the tip of the tip of the iceberg.)
Nobody with poor email habits, e.g., top-posters, full-quoters, people who use HTML markup. (Since these undercut encryption, sometimes rather badly.)
Nobody using the IOT to send or receive email, e.g., their car, which was very likely pre-compromised at the factory.
That doesn't leave a lot of people.
This analysis doesn't correspond at all to the real-life use case that I'm familiar with, of an encrypted mailing list that we're using quite successfully.
We're not using it in the belief that encrypting the list's traffic thereby achieves a high degree of protection of confidentiality; we're quite aware that that is not the case. In fact, everyone is aware of how easy it is to get onto that mailing list: the process does not involve any serious vetting beyond the fact that (due to the encrypted nature of the list) prospective subscribers are required to provide an OpenPGP public key. It's almost an open list, with a correspondingly low expectation of confidentiality.
More confidential exchanges are always by off-list encrypted email.
The encrypted mailing list nevertheless plays a very significant role in allowing those off-list encrypted email conversations to happen, by ensuring that all participants in the overall group continually have the capability of sending and reading encrypted email, and by providing a well-defined way for obtaining the public keys of any participants of the overall group (we can obtain them from the mailing list server).
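The key-distribution role Norbert describes — the list server acting as the place to obtain participants' public keys — implies that the list software must at least sanity-check what subscribers hand it before storing or importing anything. A minimal sketch of such a check, assuming nothing beyond the standard OpenPGP ASCII-armor markers (the function name and validation rules are hypothetical simplifications of RFC 4880 armor, not part of any actual list implementation):

```python
# Hypothetical sketch: cheap sanity check on an ASCII-armored OpenPGP
# public key block, e.g. before handing it to `gpg --import`.
# The rules here are a deliberate simplification of RFC 4880 armor.

def looks_like_public_key(armored: str) -> bool:
    """Return True if the text is framed by OpenPGP public-key
    armor header and tail lines, with at least something between."""
    lines = [line.strip() for line in armored.strip().splitlines()]
    return (
        len(lines) >= 3
        and lines[0] == "-----BEGIN PGP PUBLIC KEY BLOCK-----"
        and lines[-1] == "-----END PGP PUBLIC KEY BLOCK-----"
    )
```

A real list server would of course go further (verify the key actually parses, check expiry and usage flags), but even this cheap framing check filters out the most common subscriber mistake of pasting the wrong thing.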
I'm not saying "don't do it". As an intellectual exercise and a development challenge, it's interesting. I'm saying "make sure -- if people are thinking about deploying this -- that they understand that they have almost no chance of making this work as intended in the real world."
As far as I am able to tell, the encrypted list that I mentioned is working as intended for us.
I do, however, agree with rsk's analysis insofar as his arguments show that if one's intention with an encrypted mailing list were to make the communications of just about any large group of people very secure in some strong sense, that intention would be unrealistic, with almost no chance of working in the real world.
Greetings, Norbert
After I wrote most of this, I see Norbert covered some of the same points, but from the point of view of his specific use case. So I'm just going to send despite a bit of redundancy.
Rich Kulawiec writes:
Granted, this will diminish as more communications become encrypted, but for the foreseeable future, anyone using encryption or similar privacy measures will be targeted:
https://www.wired.com/2014/07/nsa-targets-users-of-privacy-services/
The people I know (and I don't know any so it's no use trying to figure out who they are :-/ ) who develop encrypted communication systems seem to disagree with you about the use cases for this: they do use encrypted mail.
I think about it this way: as you will undoubtedly point out, they know they're targeted, and they have the skills and motivation (see "know they're targeted") to do something about endpoint security. So given that their perceived threats aren't in the endpoints, they apparently see encrypted channels as useful.
In many of the use cases that have been discussed in the past, we are looking at lists where the users have *specific* threats they're worried about, such as (ex-)spouses and other stalkers, employers, and public insecure wireless (since it's a mailing list, you need to worry about whether your correspondents -- whose identities you may not know -- are all using VPNs etc). While I agree with your assessment of "a billion pwned devices on the Internet of Threats[tm]", I don't necessarily think that any given user's threat is going to be a relevant pwner. (And in fact we already know that they compete with each other, and I see no reason for that to change. Sure, the FSB and NSA will be the biggest players, but they also have some incentive not to advertise openly even on the "dark web".)
Yes, users need to be aware of the issue that their personal endpoint is not that hard to hack, and that if that happens it's not the ML's fault that their enemy is reading their "secure" mailing list posts. They also need to be aware that *anybody* subscribing is a passive threat (by "passive" I mean that if that person's endpoint is hacked, who knows who might have access to cleartext). For that reason I am of the opinion that encrypted mailing lists should be anonymous by default.
So if it becomes desirable or profitable for the new owners of those systems to pay specific attention to encrypted mailing list traffic, they will...and probably much quicker than anyone anticipates.
I'm not going to anticipate how long it will take, I'm going to assume that encrypted traffic will attract attention, including attempts to crack it just for the lulz, from the get-go.
But I suspect that the really skilled and dangerous folks won't bother targeting encrypted traffic. They'll just read everything anyway, maybe sift through it with text mining tools. I suppose such tools might be instructed to check for encrypted traffic just to save cycles by not grepping the encrypted parts, and that could lead to lists of encrypting endpoints and specific targeting as you suggest.
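The "save cycles by not grepping the encrypted parts" idea above amounts to a trivial pre-filter: anything carrying PGP armor markers is set aside, and as a side effect the filter produces exactly the list of encrypting senders that could feed specific targeting. A hedged sketch (no actual tool is named in the thread; names and data shapes here are invented for illustration):

```python
# Hypothetical text-mining pre-filter: skip OpenPGP-encrypted payloads
# (no point grepping ciphertext) while recording which senders were
# observed emitting encrypted traffic.

PGP_MARKER = "-----BEGIN PGP MESSAGE-----"

def partition_messages(messages):
    """Split (sender, body) pairs into greppable cleartext messages
    and the set of senders seen using encryption."""
    cleartext, encrypting_senders = [], set()
    for sender, body in messages:
        if PGP_MARKER in body:
            encrypting_senders.add(sender)
        else:
            cleartext.append((sender, body))
    return cleartext, encrypting_senders
```

Which is precisely why such a filter, however innocently motivated, doubles as a list of encrypting endpoints.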
Thus the target end user population for encrypted mailing lists looks something like this:
You're clearly assuming we all count APT28 among our enemies. I don't think so! Yes, I assume that if a "private sector Echelon" indeed comes into being there will be a market for its services and any previously collected information it preserves. I'm not sure garden-variety snakes in the grass will be able to afford it, though, and of course it will be a "dark web" thing, so hazardous to the health of would-be users.
In other words, I agree to an extent with Norbert that this *will* increase the cost of targeting list traffic and provide a certain amount of "political" deterrent (in the sense of being on the dark web).
I'm not saying "don't do it". As an intellectual exercise and a development challenge, it's interesting.
In other words, it should be a GSoC project. It is, or at least we're hoping it will be. :-)
I'm saying "make sure -- if people are thinking about deploying this -- that they understand that they have almost no chance of making this work as intended in the real world."
Yeah, well, good luck on that. 62 million Trump voters will believe whatever the Breitbart review says. :-(
Steve
participants (9)
- Barry Warsaw
- J.B. Nicholson
- Jan Jancar
- Morgan Reed
- Norbert Bollow
- Rich Kulawiec
- Richard Damon
- Stephen J. Turnbull
- Terri Oda