
possible enhancement: simplify sgn0 #144

Closed
kwantam opened this issue Jul 6, 2019 · 8 comments

Comments

@kwantam
Collaborator

kwantam commented Jul 6, 2019

There's a potential improvement to sgn0 that we might want to consider. This option may have been discussed a little before, but it's probably worth revisiting before things get too ossified.

The current version of sgn0 says that an element x in GF(p) is negative just when it is greater than (p - 1) / 2. (Things are slightly more complicated for GF(p^m), but let's ignore that for now.)

Since p is odd, an alternative means of differentiating between x and -x is via the least significant bit: if the least significant bit of x is set, call x negative, otherwise call it positive. (This works because p - x and x have opposite LSBs, except when x = 0.)
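For GF(p), the two candidate definitions can be sketched in a few lines of Python (the names `sgn0_cmp` and `sgn0_par` are mine, not from the draft):

```python
def sgn0_cmp(x: int, p: int) -> int:
    """Current definition: x is negative iff x > (p - 1) / 2."""
    return -1 if x > (p - 1) // 2 else 1

def sgn0_par(x: int) -> int:
    """Proposed alternative: x is negative iff its LSB is set."""
    return -1 if x & 1 else 1

p = 23  # any odd prime works
# p - x and x have opposite LSBs for all nonzero x, so parity
# distinguishes x from -x just as well as the comparison does:
assert all((x & 1) != ((p - x) & 1) for x in range(1, p))
# ...but the two definitions disagree on which class is "negative":
assert sgn0_cmp(1, p) == 1 and sgn0_par(1) == -1
assert sgn0_cmp(p - 1, p) == -1 and sgn0_par(p - 1) == 1
```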

The major advantage to this approach is that it's much easier to check a single bit in constant time than it is to compare a value to (p-1)/2. On the other hand, I'd guess that in many cases a constant-time comparison function is already available, so maybe this isn't such a decisive advantage after all.
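To illustrate the gap, here is a hedged sketch of what the branchless versions might look like, each returning a sign bit (1 = negative). Python big ints are only illustrative of the bit manipulation, not actually constant time; the helper names are mine:

```python
def sgn0_par_ct(x: int) -> int:
    # single-bit check: one AND
    return x & 1

def sgn0_cmp_ct(x: int, p: int, nbits: int) -> int:
    # branchless "x > (p - 1) / 2": compute (p - 1)/2 - x in
    # (nbits + 1)-bit two's complement and extract the sign bit
    half = (p - 1) // 2
    d = (half - x) & ((1 << (nbits + 1)) - 1)
    return (d >> nbits) & 1

p, nbits = 23, 5  # any odd prime; nbits >= bit length of p
assert sgn0_cmp_ct(11, p, nbits) == 0  # (p - 1)/2 itself is "positive"
assert sgn0_cmp_ct(12, p, nbits) == 1
assert sgn0_par_ct(12) == 0 and sgn0_par_ct(13) == 1
```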

The major disadvantage (if it can be called that) is that it leads to slightly nonintuitive definitions of "negative." For example, normally we'd think of 1 as positive, but by the parity definition, sgn0(1) == -1, and sgn0(p - 1) == 1. But maybe that's OK.

Thoughts?

Since this would be a breaking change, sooner is better than later. Also, it might change the suites definitions, since some suites have a constant Z that is "smallest in absolute value, breaking ties by choosing the positive one," and changing the definition of sign might change that.

@armfazh
Collaborator

armfazh commented Jul 30, 2019

From the implementation side, field arithmetic is usually implemented with redundant representations; in some cases, lazy reduction (working modulo a multiple of the prime) is used as well.

In both cases, a comparison or parity check requires converting the internal representation back to the unique representative 0 <= x < p before x > (p-1)/2 or x mod 2 can be determined.
In summary, from the implementation side (when using redundant representations), both operations are equally annoying.
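As a concrete illustration of this point (using Montgomery form rather than a redundant representation, but the issue is the same; the constants are my own choice):

```python
p = 2**255 - 19  # example odd prime
R = 2**256       # Montgomery radix

def to_mont(x: int) -> int:
    return (x * R) % p

def from_mont(xm: int) -> int:
    # stand-in for a real Montgomery reduction (REDC)
    return (xm * pow(R, -1, p)) % p

x = 1
xm = to_mont(x)  # internal representation of 1
# Neither the parity nor the magnitude of the stored value reflects x:
assert xm == 38 and xm % 2 == 0 and x % 2 == 1
# A conversion back to the canonical range 0 <= x < p is needed before
# either sign check makes sense:
assert from_mont(xm) == x
```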

Even though checking parity is simpler, I think x > (p-1)/2 is well understood and does not present odd semantic cases (such as the ones you mentioned above).

@kwantam
Collaborator Author

kwantam commented Jul 30, 2019

Yeah, your point about redundant representations is a good one.

Even without redundant representations, Montgomery arithmetic requires a multiplication before either the comparison or the LSB check---so performance-wise there's probably no change, just as you say.

@chris-wood
Collaborator

From an offline exchange, we might also specify sgn0 per suite.

@kwantam
Collaborator Author

kwantam commented Oct 13, 2019

@chris-wood, I spoke with @reyzin about this and I think your suggestion is a good one. I'll put together a PR that defines the two or three standard sgn0 variants and then sets the variant per-suite.

I'm not fully sold on this since it increases the complexity of the document a bit, but let's take a look.

@chris-wood
Collaborator

Sounds good!

@kwantam
Collaborator Author

kwantam commented Oct 14, 2019

OK, I looked at IEEE 1363a-2004 and it looks like we can cover that serialization format, the BLS12-381ish format, and the Ed25519ish format with just two sgn0 variants. I'm working on a branch that breaks these out. I still need to do some work to figure out who's actually implementing which serialization method for each curve, though.

@chris-wood do you happen to know where the elliptic curve point serialization format in TLS comes from? That is, is it defined globally or per curve? I'll happily go dig through the RFC and impls, I'm just wondering if you remember off the top of your head.

@chris-wood
Collaborator

chris-wood commented Oct 14, 2019

is it defined globally or per curve?

Per-curve, but there's an extension that lets endpoints specify the format, too. (Uncompressed is the default for older curves. Curve25519 and newer curves specify the format as part of the group.)

@kwantam
Collaborator Author

kwantam commented Oct 16, 2019

Got it---thanks!

It looks like essentially everyone except the pairing-friendly curves is using X9.62 (aka SEC1) point compression. This makes life relatively easy.
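For reference, X9.62/SEC1 point compression preserves exactly the parity of y: a prefix byte of 0x02 (y even) or 0x03 (y odd), followed by the big-endian x-coordinate. A minimal sketch (`coord_len` is the field element's byte length; the function name is mine):

```python
def sec1_compress(x: int, y: int, coord_len: int) -> bytes:
    # prefix encodes the LSB of y; x is serialized big-endian
    return bytes([0x02 | (y & 1)]) + x.to_bytes(coord_len, "big")

assert sec1_compress(5, 2, 32)[0] == 0x02  # y even
assert sec1_compress(5, 3, 32)[0] == 0x03  # y odd
assert len(sec1_compress(5, 3, 32)) == 33  # 1 prefix byte + 32-byte x
```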

kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 16, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 23, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 23, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 23, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 25, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 26, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 26, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 26, 2019
kwantam added a commit to kwantam/draft-irtf-cfrg-hash-to-curve that referenced this issue Oct 27, 2019