
Conversation

@JulienBeg (Contributor):

Code, docs, and tests to bound side-channel figures of merit in terms of mutual information

@cla-bot added the cla-signed label on Oct 14, 2025
@cassiersg (Contributor) left a comment:

I need to read this in more detail, but here are a few comments to begin with.

```python
) -> np.ndarray[float]:
    p = np.exp(log_p)
    q = np.exp(log_q)
    return p * (log_p - log_q) + (1 - p) * (np.log1p(-p) - np.log1p(-q))
```
@cassiersg (Contributor):

`1 - p` -> `-np.expm1(log_p)`?

Also, I think (not sure) that `np.log(-np.expm1(log_p))` would be more stable than `np.log1p(-np.exp(log_p))` (same for `q`).
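For concreteness, a minimal sketch of the suggested rewrite (the function name and signature here are hypothetical, not the ones in the PR):

```python
import numpy as np

def binary_kl_from_logs(log_p, log_q):
    """Binary KL divergence d(p || q) computed from log-probabilities."""
    p = np.exp(log_p)
    # 1 - p as -expm1(log_p): avoids cancellation when p is close to 1.
    one_minus_p = -np.expm1(log_p)
    # log(1 - p) straight from log_p, skipping the exp/log1p round-trip.
    log_1mp = np.log(-np.expm1(log_p))
    log_1mq = np.log(-np.expm1(log_q))
    return p * (log_p - log_q) + one_minus_p * (log_1mp - log_1mq)
```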

```python
log_p_ub = np.array(0).reshape((1,))

# Dichotomic search
for _ in range(niter):
```
@cassiersg (Contributor):

Could we use the root-finding methods from scipy instead of the dichotomic search? (At least an interval-based method, perhaps even Halley's method?)
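As an illustration, a bracketing solver from scipy could replace the loop along these lines (a sketch only; `g`, the bracket, and the tolerance are placeholders rather than the PR's actual objective):

```python
from scipy.optimize import brentq

def solve_log_p_ub(g, lo=-50.0, hi=0.0, xtol=1e-12):
    # brentq requires a sign change over [lo, hi]; like bisection it has
    # guaranteed convergence, but it converges superlinearly.
    return brentq(g, lo, hi, xtol=xtol)
```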

@@ -0,0 +1,331 @@
r"""Lower bounds on the log-guessing entropy, log of the guessing entropy, log of the median rank and upper bound on the proability of a successful attack in presence of key enumeration.
@cassiersg (Contributor):

It would be nice to add explanations for novice users who do not really know what they need: e.g., intuition for what those functions are and typical cases where they would be used. What are the differences between them? Also, add definitions of the concepts: rank, guessing entropy, log-guessing entropy.
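For reference, the standard definitions (a sketch; the PR's exact conventions may differ): the rank of the correct key $k^*$ is its position when all key candidates are sorted by decreasing likelihood, and the guessing entropy is the expected rank,

$$\mathrm{GE} = \mathbb{E}[\mathrm{rank}(k^*)] = \sum_{i \geq 1} i \cdot \Pr[\mathrm{rank}(k^*) = i],$$

while the log-guessing entropy is usually defined as $\mathbb{E}[\log_2 \mathrm{rank}(k^*)]$, which by Jensen's inequality is at most $\log_2 \mathrm{GE}$.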

The bounds depend on the leakage as measured by the mutual information, the size of the secret key (number of bits), and possibly the number of keys enumerated.

Examples
--------
@cassiersg (Contributor):

Examples should be per-function, I think. Also, it'd be nice to have comments in the code to explain non-obvious parameters (e.g., what does it mean to have 1000 MI values?).

@JulienBeg (Contributor, Author):

Sure, I will modify the doc structure.
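Something like the following per-function doctest could work (names follow the diff; the values and the interpretation of the 1000 MI values are purely illustrative):

```python
>>> import numpy as np
>>> # One MI estimate per experiment, e.g. 1000 bootstrap samples of the
>>> # mutual information between the leakage and a 128-bit key, in bits.
>>> mi = np.linspace(0.001, 1.0, 1000)
>>> lb = guessing_entropy(mi, key_size=128)  # lower bound on log2 of the GE
```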

```python
    return log_p_ub / np.log(base)  # in base 'base'


def f(x):
```
@cassiersg (Contributor):

This needs a better name.

@JulienBeg (Contributor, Author):

I renamed it to `massey_inequality` (I will add a reference with a comment to explain why it corresponds to Massey's inequality).
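For the record, Massey's inequality (J. L. Massey, "Guessing and entropy", ISIT 1994) states that for $H(X) \geq 2$ bits the guessing entropy satisfies $G(X) \geq 2^{H(X)}/4 + 1$. A hypothetical helper illustrating just the bound (the function in the PR appears to work with log-domain quantities instead):

```python
def massey_lower_bound(entropy_bits):
    # Massey's inequality: G(X) >= 2**H(X) / 4 + 1, valid for H(X) >= 2 bits.
    return 2.0 ** entropy_bits / 4.0 + 1.0
```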

```python
x_lb = np.maximum(np.log(1), y - 1 + np.log1p(np.exp(1 - y) / 2)).reshape((-1,))
x_ub = np.maximum(np.log(2), y - 1 + np.log1p(np.exp(1 - y) / 2 + 1)).reshape((-1,))

# Dichotomic search
```
@cassiersg (Contributor):

Can we also use a better root-finding method here?

@JulienBeg (Contributor, Author):

What about Chandrupatla's algorithm (with a vectorized scipy implementation)?

See: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.elementwise.find_root.html#scipy.optimize.elementwise.find_root

Apparently it is better than Brent's method and features guaranteed convergence, like dichotomy.

@JulienBeg (Contributor, Author):

In fact, since the function changes each time, it cannot be vectorized this way. I used Brent's method and un-vectorized the code. If there is a speed bottleneck here, we can rustify it easily.
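A sketch of that un-vectorized approach (the objective factory `objective_for` is a placeholder; the real code solves its own equation per entry):

```python
import numpy as np
from scipy.optimize import brentq

def solve_all(objective_for, y, x_lb, x_ub):
    # One Brent root find per entry, since the objective differs with y[i].
    roots = np.empty_like(x_lb)
    for i in range(len(y)):
        roots[i] = brentq(objective_for(y[i]), x_lb[i], x_ub[i])
    return roots
```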

```python
    return np.sum(I ** -np.expand_dims(a, axis=0), axis=0)


def euler_maclaurin_correction(a, k, log_M, order):
```
@cassiersg (Contributor):

It would be good to have an explanation of what each function computes (can be as simple as a reference to an equation in a paper).

@JulienBeg (Contributor, Author):

For sure, I will include some more details.
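Judging from the name and the power sum above, the function presumably applies the Euler-Maclaurin formula to approximate the truncated sum (a sketch; the exact roles of `k`, `log_M`, and `order` are assumptions):

$$\sum_{k=1}^{M} k^{-a} \approx \int_{1}^{M} x^{-a}\,dx + \frac{1 + M^{-a}}{2} + \sum_{j=1}^{\text{order}} \frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(M) - f^{(2j-1)}(1)\right), \qquad f(x) = x^{-a},$$

where the $B_{2j}$ are Bernoulli numbers.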


```python
def guessing_entropy(mutual_information, key_size, base=2):
    r"""Output a lower bound on the logarithm in base 'base' of the guessing entropy
    when a leakage upper-bounded by the mutual information 'mutual_information' is disclosed to the adversary.
```
@cassiersg (Contributor):

I think having `base` impact both the interpretation of the MI and the GE is a bug trap. I think we can keep the GE always in bits.

@JulienBeg (Contributor, Author):

OK, then I believe the same should apply to the log-guessing entropy, median rank, and success rate, to be consistent.
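One way to make that convention unambiguous, sketched with a hypothetical helper: convert the input MI to bits once at the API boundary and express every returned bound in bits.

```python
import numpy as np

def mi_to_bits(mutual_information, base):
    # A value expressed in base-'base' log units times log2(base) gives bits.
    return np.asarray(mutual_information) * np.log2(base)
```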

@cassiersg (Contributor):

Regarding the organization: it would make sense to group the new functions and the one from #205 into a single module that would deal with all post-processing of information bounds (e.g., `postprocessing.information`).

@JulienBeg (Contributor, Author):

> Regarding the organization: it would make sense to group the new functions and the one from #205 into a single module that would deal with all post-processing of information bounds (e.g., `postprocessing.information`).

Should I merge both PRs and create a new one with this organization?
