Discussion:
[tor-dev] Temporary hidden services
Michael Rogers
2018-09-27 12:43:54 UTC
Hi all,

The Briar team is working on a way for users to add each other as
contacts by exchanging links without having to meet in person.

We don't want to include the address of the user's long-term Tor hidden
service in the link, as we assume the link may be observed by an
adversary, who would then be able to use the availability of the hidden
service to tell whether the user was online at any future time.

We're considering two solutions to this issue. The first is to use a
temporary hidden service that's discarded after, say, 24 hours. The
address of the temporary hidden service is included in the link. This
limits the window during which the user's activity is exposed to an
adversary who observes the link, but it also requires the contact to use
the link before it expires.

The second solution is to include an ECDH public key in the link,
exchange links with the contact, and derive a hidden service key pair
from the shared secret. The key pair is known to both the user and the
contact. One of them publishes the hidden service, the other connects to
it. They exchange long-term hidden service addresses via the temporary
hidden service, which is then discarded.

The advantage of the second solution is that the user's link is static -
it doesn't expire and can be shared with any number of contacts. A
different shared secret, and thus a different temporary hidden service,
is used for adding each contact.
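
In rough Python terms -- a minimal sketch using the pyca/cryptography
primitives, where the HKDF parameters and the "temp-hs-key" label are
just illustrative, not a finalised design -- the derivation we have in
mind for the second solution looks like this:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each user puts their X25519 public key in the link they share.
    alice_dh = X25519PrivateKey.generate()
    bob_dh = X25519PrivateKey.generate()

    # After exchanging links, both sides compute the same shared secret...
    shared = alice_dh.exchange(bob_dh.public_key())

    # ...and stretch it into a seed for the temporary hidden service's
    # Ed25519 identity key, which is then known to both parties.
    seed = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"temp-hs-key").derive(shared)
    temp_hs_key = Ed25519PrivateKey.from_private_bytes(seed)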

But using a hidden service in such a way that the client connecting to
the service knows the service's private key is clearly a departure from
the normal way of doing things. So before pursuing this idea I wanted to
check whether it's safe, in the sense that the hidden service still
conceals its owner's identity from the client.

Attacks against the availability of the service (such as uploading a
different descriptor) are pointless in this scenario because the client
is the only one who would connect to the service anyway. So I'm just
interested in attacks against anonymity.

Can anyone shed any light on this question? Or is this way of using
hidden services too disgusting to even discuss? :-)

Thanks,
Michael
Chad Retz
2018-09-27 19:02:01 UTC
I am no expert here, but I'm confused by "the client connecting to the
service knows the service's private key". Why not just create an onion
service (per contact) and then use the client authentication feature
to ensure they share the same secret? Client auth is built into
discovery and, from what I understand, retains the anonymity of the owner.
Also, why derive the hidden service key pair from the shared secret
instead of having the sender provide the address based on a privately
derived key pair? To your primary question, I too would like to know
whether, given the private key of an onion service, the anonymity of the
original publisher is maintained. I would guess that it is (granted that
the client can overwrite the descriptor and take the service over, and whatnot).
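
For example, with v2 onion services the service side would be something
like this in torrc (stealth mode; the paths and names here are made up):

    HiddenServiceDir /var/lib/tor/contact_hs/
    HiddenServicePort 1234 127.0.0.1:1234
    HiddenServiceAuthorizeClient stealth contact1

and the client pastes the credential that Tor generates for "contact1"
into their own torrc:

    HidServAuth youraddress.onion auth-cookie-from-the-service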

Chad
meejah
2018-09-28 01:51:50 UTC
In this realm, perhaps https://github.com/warner/magic-wormhole could be
used to construct a solution?

Some person has to "initiate" the wormhole and pass the (short) code to
the other person -- this could be via some established channel, like
"over the phone" (the codes exchanged are only usable once)...
--
meejah
Michael Rogers
2018-10-01 11:15:21 UTC
Hi Chad,
Post by Chad Retz
I am no expert here, but I'm confused by "the client connecting to the
service knows the service's private key". Why not just create an onion
service (per contact) and then use the client authentication feature
to ensure they share the same secret? Client auth is built into
discovery and, from what I understand, retains the anonymity of the owner.
Creating an onion service per contact would be a possibility. Some
information about the user's online and offline periods would still be
leaked to an adversary who observed the link, via the publication and
renewal of the hidden service descriptor, but I think you're right that
client auth would reduce the leakage by preventing the adversary from
connecting to the service to test whether it was online at any given
moment. Thanks for the suggestion!
Post by Chad Retz
Also, why derive the hidden service key pair from the shared secret
instead of having the sender provide the address based on a privately
derived key pair?
I admit it's a weird way of doing things. The shared secret approach
allows the user to use the same link for all contacts over a long
period, without exposing a long-term hidden service address to an
adversary who observes the links.

This has some advantages, such as being able to publish your link via
multiple channels (email sig, social media profile, etc.) that recipients
can check to increase their confidence that they've received your
authentic link.

On the other hand, time-limited or single-use links are less likely to
become surveillance selectors in their own right, and the "key
fingerprints on business cards" pattern has never really caught on. So
there are pros and cons to both approaches.

Cheers,
Michael
Nick Mathewson
2018-09-28 01:40:33 UTC
Here's a third idea to think about: What if you use the same key
derivation trick we use in v3 onion services?

That is, every user could have a long term private key x and a public
key g^x. If the user with key g^x wants to allow a user with key g^y to
meet with them, they derive s = h(g^xy) as the shared secret...
but instead of using s as a private key, they do the key derivation
trick, so that the single-use public key is derived as (g^x)*(g^s) =
g^(x+s), and the single use secret key is derived as (x + s). This
way, each of them winds up with a private/public keypair for use with
the other; each user is the only one that knows their private key; and
the two of them are the only ones who can derive the public key.
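
In toy numbers (a multiplicative group mod a prime, chosen only to make
the algebra visible -- this is not real crypto, and the real thing uses
Ed25519):

    import hashlib

    p, g = 2**127 - 1, 5   # toy group
    q = p - 1              # exponent arithmetic works mod the group order

    x, y = 1234567, 7654321              # long-term private keys
    X, Y = pow(g, x, p), pow(g, y, p)    # long-term public keys g^x, g^y

    def h(elem):
        # hash a group element down to a scalar
        return int.from_bytes(
            hashlib.sha256(str(elem).encode()).digest(), 'big') % q

    # Both sides derive the same s = h(g^xy).
    s = h(pow(Y, x, p))
    assert s == h(pow(X, y, p))

    # Either side can derive the single-use public key g^x * g^s = g^(x+s),
    # but only the owner of x can derive the matching private key x + s.
    single_use_pub = (X * pow(g, s, p)) % p
    single_use_priv = (x + s) % q
    assert pow(g, single_use_priv, p) == single_use_pub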

You could do this in Tor's v3 onion-service design as-is: Just put
h(g^xy) as the "optional secret s" input when deriving the daily
key.

For more info on this key derivation mechanism, see rend-spec-v3.txt,
appendix A.

(Warning: I am not a cryptographer; I haven't thought about this idea
very hard yet.)

peace,
--
Nick
Michael Rogers
2018-10-01 11:37:42 UTC
Hi Nick,

This is a really interesting idea, thanks. I'm kicking myself because we
tried to come up with a way to derive a key pair such that the user gets
the private key and the contact gets the public key, and we couldn't
come up with anything - but it's already there as part of the HS design!

However, I'm much further away from being a cryptographer than you are,
and there are aspects of this that definitely go beyond my expertise.

The biggest one is that the user and her contact need to start with ECDH
key pairs (in order to derive a shared secret) and end up with an
Ed25519 key pair (in order to use it for a hidden service), whereas the
v3 key blinding scheme starts and ends with Ed25519 key pairs. I believe
there are ways to convert between X25519 and Ed25519 keys, but I don't
know what caveats would come with doing that, especially considering
that we want to reuse the DH keys for other exchanges.

I understand that the Tor devs are at a meeting this week, but I'm
hoping to pick this up when you get back.

Cheers,
Michael
George Kadianakis
2018-10-15 18:11:04 UTC
Hello Michael,

Nick's trick seems like a reasonable way to avoid the issue with both parties
knowing the private key.

I have a separate question wrt the threat model:

It seems to me that the adversary in this game can observe the link, and
all these stunts are done just for the case where the adversary obtains
the link (i.e. the temporary ECDH public keys).

In that case, given that both Alice and Bob are completely
unauthenticated and there is no root of trust, how can you ensure that
the adversary Eve won't perform the ECDH herself, then connect to the
temporary onion service, and steal the long-term onion service link,
thereby destroying the secrecy of the long-term onion service forever,
even if the attack is detected in the future through Alice and Bob
communicating in an out-of-band way?

Are we assuming that Alice and Bob have no common shared secret in
place? If they did, you could use it from the start to encrypt the
long-term onion service identifier. If they don't, you could
potentially fall into attacks like the one above.

Cheers!
Michael Rogers
2018-10-17 18:27:32 UTC
Hi George,
Post by George Kadianakis
Nick's trick seems like a reasonable way to avoid the issue with both parties
knowing the private key.
Thanks! Good to know. Any thoughts about how to handle the conversion
between ECDH and EdDSA keys?

If we decided not to use the key blinding trick, and just allowed both
parties to know the private key, do you see any attacks?
Post by George Kadianakis
It seems to me that the adversary in this game can observe the link, and
all these stunts are done just for the case where the adversary obtains
the link (i.e. the temporary ECDH public keys).
In that case, given that both Alice and Bob are completely
unauthenticated and there is no root of trust, how can you ensure that
the adversary Eve won't perform the ECDH herself, then connect to the
temporary onion service, and steal the long-term onion service link,
thereby destroying the secrecy of the long-term onion service forever,
even if the attack is detected in the future through Alice and Bob
communicating in an out-of-band way?
Are we assuming that Alice and Bob have no common shared secret in
place? If they did, you could use it from the start to encrypt the
long-term onion service identifier. If they don't, you could
potentially fall into attacks like the one above.
Great question. We assume the channel over which the links are exchanged
is insecure, so an adversary who can read and modify the channel can
carry out a man-in-the-middle attack as you describe. However, we also
assume there are some adversaries that can read but not modify the
channel, and it's worth protecting against those adversaries even if we
can't protect against stronger ones that can also modify the channel.

A realistic example of a read-only adversary would be one that gets
retrospective access to chat logs.

As you pointed out, modification can potentially be detected later
(although we haven't designed the protocol for doing that yet). I guess
that may help to deter adversaries who don't want to reveal that they
can read and modify the channel.

Cheers,
Michael
George Kadianakis
2018-10-18 12:26:02 UTC
Post by Michael Rogers
Hi George,
Post by George Kadianakis
Nick's trick seems like a reasonable way to avoid the issue with both parties
knowing the private key.
Thanks! Good to know. Any thoughts about how to handle the conversion
between ECDH and EdDSA keys?
Hmm, that's a tricky topic! Using the same x25519 keypair for DH and
signing is something that should be done only under the supervision of a
proper cryptographer(tm). I'm not a proper cryptographer, so I'm
literally unable to evaluate whether doing it in your case would be
secure. If it's possible, I would avoid it altogether...

I think one of the issues is that when you transform your x25519 DH key
to an ed25519 key and use it for signing, if the attacker is able to
choose what you sign, the resulting signature will basically provide a
DH oracle to the attacker, which can result in your privkey getting
completely pwned. We actually do this x25519<->ed25519 conversion for
onion-key cross-certificates (proposal 228), but we had the design
carefully reviewed by people who know what's going on (unlike me).

In your case, the resulting ed25519 key would be used to sign the
temporary HS descriptor. The HS descriptor is of course not entirely
attacker-controlled data, but part of it *could be considered* attacker
controlled (e.g. the encrypted introduction points), and I really don't
know whether security can be impacted in this case. Also there might be
other attacks that I'm unaware of... Again, you need a proper
cryptographer for this.

A cheap way to avoid this might be to include both an x25519 and an
ed25519 key in the "link" you send to the other person. You use the
x25519 key to do the DH and derive the shared secret, and then both
parties use the shared secret to blind the ed25519 key and derive the
blinded (aka hierarchically key derived) temporary onion service
address... Maybe that works for you, but it will double the link size,
which might impact UX.
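
Back-of-the-envelope, with both 32-byte public keys base32-encoded the
way onion addresses are (random bytes standing in for real keys here):

    import base64, os

    x25519_pub, ed25519_pub = os.urandom(32), os.urandom(32)

    one_key = base64.b32encode(x25519_pub).decode().lower().rstrip('=')
    two_keys = base64.b32encode(
        x25519_pub + ed25519_pub).decode().lower().rstrip('=')
    print(len(one_key), len(two_keys))   # 52 vs 103 characters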

And talking about UX, this is definitely a messy protocol UX-wise. One
person has to wait for the other person to start up a temporary HS. What
happens if the HS never comes up? How often does the other person check
for the HS? How long does the first person keep the HS up? You can
probably hide all these details in the UI, but it still seems like a
messy situation.

I think the easiest approach here would be to just encrypt the permanent
onion address to the other person using a pre-shared secret, but I guess
your protocol assumes that the two participants don't really know each other...

Good luck! :)
Michael Rogers
2018-10-19 10:51:58 UTC
Thanks, that confirms my reservations about converting between ECDH and
EdDSA keys, especially when we don't fully control what each key will be
used for. I think we'd better hold off on that approach unless/until the
crypto community comes up with idiot-proof instructions.
Post by George Kadianakis
A cheap way to avoid this might be to include both an x25519 and an
ed25519 key in the "link" you send to the other person. You use the
x25519 key to do the DH and derive the shared secret, and then both
parties use the shared secret to blind the ed25519 key and derive the
blinded (aka hierarchically key derived) temporary onion service
address... Maybe that works for you, but it will double the link size,
which might impact UX.
Nice! Link size aside, that sounds like it ought to work.

A given user's temporary hidden service addresses would all be related
to each other in the sense of being derived from the same root Ed25519
key pair. If I understand right, the security proof for the key blinding
scheme says the blinded keys are unlinkable from the point of view of
someone who doesn't know the root public key (and obviously that's a
property the original use of key blinding requires). I don't think the
proof says whether the keys are unlinkable from the point of view of
someone who does know the root public key, but doesn't know the blinding
factors (which would apply to the link-reading adversary in this case,
and also to each contact who received a link). It seems like common sense
that you can't use the root key (and one blinding factor, in the case of
a contact) to find or distinguish other blinded keys without knowing the
corresponding blinding factors. But what seems like common sense to me
doesn't count for much in crypto...

We'd also have to be careful about the number of blinded keys generated
from a given root key. The security proof uses T = 2^16 as an example
for the maximum number of epochs, giving a 16-bit security loss vs
normal Ed25519. In this scheme T would be the maximum number of contacts
rather than epochs. 2^16 is more than enough for the current context,
where contacts are added manually, but we'd have to bear in mind that it
wouldn't be safe to use this for automatic exchanges initiated by other
parties.
Post by George Kadianakis
And talking about UX, this is definitely a messy protocol UX-wise. One
person has to wait for the other person to start up a temporary HS. What
happens if the HS never comes up? How often does the other person check
for the HS? How long does the first person keep the HS up? You can
probably hide all these details in the UI, but it still seems like a
messy situation.
Messy? Yeah, welcome to P2P. ;-)

We're testing a prototype of the UX at the moment.

Bringing up the hidden service tends to take around 30 seconds, which is
a long time if you make the user sit there and watch a progress wheel,
but not too bad if you let them go away and do other things until a
notification tells them it's done.

Of course that's the happy path, where the contact's online and has
already opened the user's link. If the contact sent their link and then
went offline, the user has to wait for them to come back online. So we
keep a list of pending contact requests and show the status for each
one. After some time, perhaps 7 days, we stop trying to connect and mark
the contact request as failed.
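
A minimal sketch of that bookkeeping, in Python for brevity (the real
code is Java, and all the names here are made up):

    import time

    PENDING_TIMEOUT = 7 * 24 * 60 * 60   # give up after about 7 days

    class PendingContact:
        def __init__(self, alias, temp_onion):
            self.alias, self.temp_onion = alias, temp_onion
            self.created = time.time()
            self.state = 'waiting'   # waiting -> connected | failed

        def poll(self, try_connect):
            if self.state == 'waiting':
                if try_connect(self.temp_onion):
                    self.state = 'connected'
                elif time.time() - self.created > PENDING_TIMEOUT:
                    self.state = 'failed'
            return self.state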

(In my first email I mentioned an alternative approach, where we set up
a temporary hidden service in advance and just send its address in the
link, which expires after 24 hours. In that case we can shave 30 seconds
off the happy path, but we need to work out the UX for explaining that
links will expire and dealing with expired links. So there are pros and
cons to both approaches.)
Post by George Kadianakis
I think the easiest approach here would be to just encrypt the permanent
onion address to the other person using a pre-shared secret, but I guess
your protocol assumes that the two participants don't really know each other...
Sure, a pre-shared secret would solve all our problems, but the use case
here is adding contacts without meeting in person. We already have a
completely different, equally messy UX for users who can meet face to
face. ;-)
Post by George Kadianakis
Good luck! :)
Thanks!

Cheers,
Michael
George Kadianakis
2018-10-19 13:01:26 UTC
Post by Michael Rogers
Nice! Link size aside, that sounds like it ought to work.
A given user's temporary hidden service addresses would all be related
to each other in the sense of being derived from the same root Ed25519
key pair. If I understand right, the security proof for the key blinding
scheme says the blinded keys are unlinkable from the point of view of
someone who doesn't know the root public key (and obviously that's a
property the original use of key blinding requires). I don't think the
proof says whether the keys are unlinkable from the point of view of
someone who does know the root public key, but doesn't know the blinding
factors (which would apply to the link-reading adversary in this case,
and also to each contact who received a link). It seems like common sense
that you can't use the root key (and one blinding factor, in the case of
a contact) to find or distinguish other blinded keys without knowing the
corresponding blinding factors. But what seems like common sense to me
doesn't count for much in crypto...
Hm, where did you get this about the security proof? The only security
proof I know of is https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf and I don't see
that assumption anywhere in there, but it's also been a long while since
I read it.

I think in general you are OK here. An informal argument: according to
rend-spec-v3.txt appendix A.2 the key derivation is as follows:

derived private key: a' = h a (mod l)
derived public key: A' = h A = (h a) B

In your case, the attacker does not know 'h' (the blinding factor),
whereas in the case of an onion service the attacker does not know 'a' or
'a*B' (the private/public key). In both cases, the attacker is missing
knowledge of a secret scalar, so it does not seem to make a difference
which scalar the attacker does not know.
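
The same thing in the toy multiplicative notation (a discrete-log group
mod a prime, standing in for the real curve):

    p, g = 2**127 - 1, 5
    q = p - 1

    a = 31337            # identity (root) private key
    A = pow(g, a, p)     # identity public key, known to the attacker

    h_blind = 424242     # blinding factor, unknown to the attacker
    a_blind = (h_blind * a) % q     # a' = h a (mod l)
    A_blind = pow(A, h_blind, p)    # A' = h A

    assert pow(g, a_blind, p) == A_blind
    # Linking A to A' means recovering h_blind -- a discrete log, the
    # same shape of problem as recovering 'a' from A.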

Of course, the above is super informal, and I'm not a cryptographer,
yada yada.
Post by Michael Rogers
Messy? Yeah, welcome to P2P. ;-)
We're testing a prototype of the UX at the moment.
Bringing up the hidden service tends to take around 30 seconds, which is
a long time if you make the user sit there and watch a progress wheel,
but not too bad if you let them go away and do other things until a
notification tells them it's done.
Of course that's the happy path, where the contact's online and has
already opened the user's link. If the contact sent their link and then
went offline, the user has to wait for them to come back online. So we
keep a list of pending contact requests and show the status for each
one. After some time, perhaps 7 days, we stop trying to connect and mark
the contact request as failed.
Yeah, I don't think a progress wheel is what you want here. You probably
want a greyed-out contact saying "Contact pending...", as when adding a
contact in Ricochet.
Michael Rogers
2018-10-22 10:51:08 UTC
Post by George Kadianakis
Hm, where did you get this about the security proof? The only security
proof I know of is https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf and I don't see
that assumption anywhere in there, but it's also been a long while since
I read it.
I may have misunderstood the paper, but I was talking about the
unlinkability property defined in section 4.1.

If I understand right, the proof says that descriptors created with a
given identity key are unlinkable to each other, in the sense that an
adversary who's allowed to query for descriptors created with the
identity key can't tell whether one of the descriptors has been replaced
with one created with a different identity key.

It seems to follow that the blinded keys used to sign the descriptors*
are unlinkable, in the sense that an adversary who's allowed to query
for blinded keys derived from the identity key can't tell whether one of
the blinded keys has been replaced with one derived from a different
identity key - otherwise the adversary could use that ability to
distinguish the corresponding descriptors.

What I was trying to say before is that although I don't understand the
proof in section 5.1 of the paper, I *think* it's based on an adversary
who only sees the descriptors and doesn't also know the identity public
key. This is totally reasonable for the original setting, where we're
not aiming to provide unlinkability from the perspective of someone who
knows the identity public key. But it becomes problematic in this new
setting we're discussing, where the adversary is assumed to know the
identity public key and we still want the blinded keys to be unlinkable.

* OK, strictly speaking the blinded keys aren't used to sign the
descriptors directly, they're used to certify descriptor-signing keys -
but the paper argues that the distinction doesn't affect the proof.
Post by George Kadianakis
I think in general you are OK here. An informal argument: according to
rend-spec-v3.txt appendix A.2 the key derivation is as follows:
derived private key: a' = h a (mod l)
derived public key: A' = h A = (h a) B
In your case, the attacker does not know 'h' (the blinding factor),
whereas in the case of an onion service the attacker does not know 'a' or
'a*B' (the private/public key). In both cases, the attacker is missing
knowledge of a secret scalar, so it does not seem to make a difference
which scalar the attacker does not know.
Of course, the above is super informal, and I'm not a cryptographer,
yada yada.
I agree it seems like it should be safe. My point is really just that we
seem to have gone beyond what's covered by the proof, which tends to
make me think I should prefer a solution that I understand a bit better.

(At the risk of wasting your time though, I just want to suggest an
interesting parallel. Imagine we're just dealing with a single ordinary
key pair, no blinding involved. The public key X = xB, where x is the
private key and B is the base point. Now obviously we rely on this property:

1. Nobody can find x given X and B

But we don't usually require that:

2. Nobody can tell whether public keys X and Y share the same base point
without knowing x, y, or the base point
3. Nobody can tell whether X has base point B without knowing x

We don't usually care about these properties because the base point is
public knowledge. But in the key blinding setting, the base point is
replaced with the identity public key. As far as I can see, the proof in
the paper covers property 2 but not property 3. I'm certainly not saying
that I know whether property 3 is true - I just want to point out that
it seems to be distinct from properties 1 and 2.)
Post by George Kadianakis
Yeah, I don't think a progress wheel is what you want here. You probably
want a greyed-out contact saying "Contact pending...", as when adding a
contact in Ricochet.
Right, what we have at the moment is essentially that, but the pending
contacts are in a separate list. Moving them into the main contact list
might be clearer though - thanks for the idea.

Cheers,
Michael
Leif Ryge
2018-10-19 15:05:12 UTC
On Wed, Oct 17, 2018 at 07:27:32PM +0100, Michael Rogers wrote:
[...]
Post by Michael Rogers
If we decided not to use the key blinding trick, and just allowed both
parties to know the private key, do you see any attacks?
[...]

If I'm understanding your proposal correctly, I believe it would leave
you vulnerable to a Key Compromise Impersonation (KCI) attack.

Imagine the scenario where Alice and Bob exchange the information to
establish their temporary rendezvous onion which they both know the
private key to, and they agree that Bob will be the client and Alice
will be the server.

But, before Bob connects to it, the adversary somehow obtains a copy of
everything Bob knows (but they don't have the ability to modify data or
software on his computer - they just got a copy of his secrets somehow).

Obviously the adversary could then impersonate Bob to Alice, because
they know everything that Bob knows. But, perhaps less obviously, in the
case that Bob is supposed to connect to Alice's temporary onion which
Bob (and now the adversary) know the key to, the adversary can also
now impersonate Alice to Bob (by overwriting Alice's descriptors and
taking over her temporary onion service).

In this scenario, if Bob hadn't known the private key for Alice's
temporary onion that he is connecting to, impersonating Alice to Bob
would not have been possible.

And of course, if the adversary can successfully impersonate both
parties to each other at this phase, they can provide their own long-term
keys to each of them and establish a long-term bidirectional MITM -
which, I think, would otherwise not be possible even in the event that
one party's long-term key was compromised.

Bob losing control of the key material before using it (without his
computer being otherwise compromised) admittedly seems like an unlikely
scenario, but you asked for "any attacks", so I think KCI is one (if
I'm understanding your proposal correctly).

~leif
Michael Rogers
2018-10-22 14:26:08 UTC
Hi Leif,

Thanks for pointing this out - I'd heard about this kind of attack but
I'd forgotten to consider it.

We're planning to do a key exchange at the application layer after
making the hidden service connection, so I don't think the adversary's
ability to impersonate Alice's hidden service to Bob would necessarily
lead to application-layer impersonation on its own. But if you hadn't
raised this we might have designed the application-layer key exchange in
a way that was vulnerable to KCI as well, so thanks!

Cheers,
Michael
