Leakage to success #206
Conversation
cassiersg left a comment:
I need to read this more in detail, but here are a few comments to begin with.
```python
) -> np.ndarray[float]:
    p = np.exp(log_p)
    q = np.exp(log_q)
    return p * (log_p - log_q) + (1 - p) * (np.log1p(-p) - np.log1p(-q))
```
`1 - p` -> `-np.expm1(log_p)`?
Also, I think (not sure) that `np.log(-np.expm1(log_p))` would be more stable than `np.log1p(-np.exp(log_p))` (same for q).
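A minimal sketch of how the suggested rewrite could look (the function name `binary_divergence` and the example values are illustrative, not the PR's):

```python
import numpy as np

def binary_divergence(log_p, log_q):
    # KL divergence between Bernoulli(p) and Bernoulli(q), inputs in log domain.
    p = np.exp(log_p)
    # 1 - p via -expm1 avoids cancellation when p is close to 1.
    one_minus_p = -np.expm1(log_p)
    # log(1 - p) computed as log(-expm1(log_p)), as suggested above;
    # it stays accurate when log_p is close to 0 (p close to 1).
    log1m_p = np.log(-np.expm1(log_p))
    log1m_q = np.log(-np.expm1(log_q))
    return p * (log_p - log_q) + one_minus_p * (log1m_p - log1m_q)

print(binary_divergence(np.log(0.999999), np.log(0.5)))  # ~0.693
```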
```python
log_p_ub = np.array(0).reshape((1,))

# Dichotomic search
for _ in range(niter):
```
Could we use interval-based root-finding methods from scipy instead of dichotomic search?
(At least a bracketing method, perhaps even Halley's method?)
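A sketch of the suggestion (the objective `g` below is a stand-in, not the PR's function): scipy's bracketing solvers such as `toms748` or `brentq` keep the guaranteed convergence of bisection while converging much faster.

```python
import numpy as np
from scipy.optimize import toms748

def g(x):
    # Stand-in objective with a sign change on the bracket [-10, 0].
    return np.exp(x) - 0.5

root = toms748(g, -10.0, 0.0, xtol=1e-12)
print(root)  # log(0.5) ~ -0.6931
```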
```python
@@ -0,0 +1,331 @@
r"""Lower bounds on the log-guessing entropy, the log of the guessing entropy and the log of the median rank, and an upper bound on the probability of a successful attack in the presence of key enumeration.
```
It would be nice to add explanations for novice users who do not really know what they need. E.g., intuition for what those functions are and typical cases where they would be used. What are the differences between them? Also add definitions of the concepts: rank, guessing entropy, log-guessing entropy.
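For reference, the standard definitions the comment asks for (textbook material, not from this PR; the log-GE convention below is an assumption): with key candidates sorted by decreasing posterior probability and $r$ the rank of the correct key,

```latex
\mathrm{GE} = \mathbb{E}[r], \qquad
\text{log-GE} = \mathbb{E}[\log_2 r], \qquad
\text{median rank} = \operatorname{median}(r)
```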
```python
The bounds depend on the leakage as measured by the mutual information, the size of the secret key (number of bits) and possibly the number of keys enumerated.

Examples
--------
```
Examples should be per-function, I think. Also, it'd be nice to have comments in the code to explain non-obvious parameters (e.g., what does it mean to have 1000 MI values?).
Sure, I will modify the doc structure.
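A hypothetical sketch of what such a per-function Examples section could look like (function name, parameter meanings and values are illustrative assumptions):

```python
"""
Examples
--------
>>> import numpy as np
>>> # One MI value per attacked trace count: MI after 1, 2, ..., 1000 traces.
>>> mi = np.linspace(0.001, 1.0, 1000)
>>> # Lower bound (in bits) on the guessing entropy of a 128-bit key.
>>> lb = guessing_entropy(mi, key_size=128)
"""
```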
```python
    return log_p_ub / np.log(base)  # in base 'base'


def f(x):
```
This needs a better name.
I renamed it to massey_inequality (I will add a reference with a comment to explain why it corresponds to Massey's inequality).
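For reference (the classical result, not the PR's statement of it): Massey's inequality lower-bounds the expected number of guesses by the Shannon entropy $H(X)$ in bits,

```latex
\mathbb{E}[G(X)] \;\ge\; 2^{\,H(X)-2} + 1, \qquad \text{provided } H(X) \ge 2 \text{ bits}
```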
```python
x_lb = np.maximum(np.log(1), y - 1 + np.log1p(np.exp(1 - y) / 2)).reshape((-1,))
x_ub = np.maximum(np.log(2), y - 1 + np.log1p(np.exp(1 - y) / 2 + 1)).reshape((-1,))

# Dichotomic search
```
Can we also use a better root-finding method here?
What about Chandrupatla's algorithm (with a vectorized scipy implementation)?
Apparently it is better than Brent's method and features guaranteed convergence, as with dichotomy.
In fact, since the function changes each time, it cannot be vectorized this way.
I used Brent's method and unvectorized the search.
If there is a speed bottleneck here, we can rustify it easily.
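A minimal sketch of the unvectorized Brent approach described above (helper and argument names are assumptions, not the PR's API):

```python
import numpy as np
from scipy.optimize import brentq

def solve_each(func, x_lb, x_ub):
    # One root per element; func(x, i) must change sign on [x_lb[i], x_ub[i]].
    roots = np.empty_like(x_lb)
    for i in range(len(x_lb)):
        roots[i] = brentq(func, x_lb[i], x_ub[i], args=(i,))
    return roots

# Example: roots of x**2 - c[i] on [0, 10] for several c values.
c = np.array([2.0, 3.0, 5.0])
print(solve_each(lambda x, i: x * x - c[i], np.zeros(3), np.full(3, 10.0)))
```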
```python
    return np.sum(I ** -np.expand_dims(a, axis=0), axis=0)


def euler_maclaurin_correction(a, k, log_M, order):
```
It would be good to have an explanation of what each function computes (it can be as simple as a reference to an equation in a paper).
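For context (inferred from the function name, to be confirmed by the promised reference): the Euler-Maclaurin formula approximates sums such as the $\sum_i I^{-a}$ above by an integral plus correction terms,

```latex
\sum_{i=1}^{M} f(i) \;\approx\; \int_{1}^{M} f(x)\,\mathrm{d}x
\;+\; \frac{f(1)+f(M)}{2}
\;+\; \sum_{k=1}^{n} \frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(M) - f^{(2k-1)}(1)\right)
```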
For sure, I will include some more details.
```python
def guessing_entropy(mutual_information, key_size, base=2):
    r"""Output a lower bound on the logarithm in base 'base' of the guessing entropy
    when a leakage upper bounded by the mutual information 'mutual_information' is disclosed to the adversary.
```
I think having `base` impact both the interpretation of the MI and the GE is a bug trap. I think we can keep the GE always in bits.
Ok, then I believe the same should apply to the log-guessing entropy, the median rank and the success rate, to be consistent.
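A sketch of the convention agreed above (hypothetical helper, not the PR's API): compute every bound internally in bits and convert only on output, so `base` never changes how the MI input is interpreted.

```python
import numpy as np

def bits_to_base(value_bits, base):
    # Convert a quantity expressed in bits (log base 2) to another log base.
    return value_bits / np.log2(base)

print(bits_to_base(128.0, np.e))  # 128 bits expressed in nats, ~88.7
```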
Regarding the organization: it would make sense to group the new functions and the one of #205 in a single module that would deal with all post-processing of information bounds (e.g., …).

Should I merge both PRs and create a new one with this organization?
Code, docs and tests to bound side-channel figures of merit in terms of mutual information.