[Cryptography] Stupid question on S-boxes
For quite a while, S-boxes have been designed to resist linear and differential cryptanalysis. The problem with small S-boxes is that you need a lot of diffusion to spread the confusion around, and you need a number of "rounds" to achieve this. But now that we know a lot more about how to design S-boxes, why don't we skip the Feistel stuff and round iterations entirely, and simply use larger S-boxes? I.e., if there are constructions which build large S-boxes from smaller ones, why don't we just do that?

_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
At 05:40 PM 1/24/2019, Bill Cox wrote:
>S-boxes leak secret key info through cache timing attacks. IMO, they should be avoided.

I'm not sure what you're saying here. Are you saying that you shouldn't implement S-boxes in software? Clearly S-boxes are *already* being implemented in software, although they tend to be *small* and *a priori fixed* S-boxes -- e.g., DES, AES. So yes, the same sorts of masking and other side-channel defenses continue to be required. So "cache timing attacks" can't be a legitimate argument against the use of "large" -- e.g., 128-bit -- S-boxes, per se.

You might argue that a 128-bit S-box is *too* large to implement in proper hardware, and that may be a (current) legitimate concern. But in a 7nm world, there's less and less that's "too large".

You might be suggesting that current "large" S-boxes can't be implemented efficiently enough to be useful in practice, and you might be (currently) correct about that. However, as the history of hardware has shown, critical tasks tend to get optimized quite quickly, so I'm asking a theoretical question that might eventually become non-theoretical.

You might also worry that I'm suggesting a large *dynamic* S-box, whose construction might somehow be encoded into a private key. I'm not (currently) suggesting this; AES has done just fine with a priori fixed S-boxes, so I suspect that dynamic S-boxes won't be necessary, although some have argued that dynamic S-boxes avoid rainbow-table-type attacks.

Once again, I'm suggesting that the Feistel structure and multiple rounds are an attempt to build larger S-boxes from smaller ones, and we're getting to the point where newer constructions may be better than the Feistel/round mechanisms for performing this recursive construction. That's my basic question.
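For readers following along, the "build a larger S-box from smaller ones" idea is easy to see in miniature. The sketch below (a toy illustration, not any real cipher; the round keys are made up for demonstration) uses a 4-bit S-box in a 3-round Feistel network to produce a keyed 8-bit permutation -- i.e., a 256-entry "S-box" built recursively from a 16-entry one. The Feistel structure guarantees invertibility no matter what the round function is:

```python
# Toy Feistel construction: a 4-bit S-box -> an 8-bit keyed permutation.
# Illustrative only; the round keys below are arbitrary examples.

SBOX4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
         0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # a known 4-bit S-box (PRESENT's)

def feistel8(x, round_keys):
    """Encrypt one byte: split into 4-bit halves, run Feistel rounds."""
    left, right = x >> 4, x & 0xF
    for k in round_keys:
        # classic Feistel step: (L, R) -> (R, L xor F(R))
        left, right = right, left ^ SBOX4[right ^ k]
    return (left << 4) | right

def feistel8_inv(y, round_keys):
    """Invert by running the rounds backwards."""
    left, right = y >> 4, y & 0xF
    for k in reversed(round_keys):
        left, right = right ^ SBOX4[left ^ k], left
    return (left << 4) | right

keys = [0x3, 0xA, 0x6]                       # arbitrary demo round keys
table = [feistel8(x, keys) for x in range(256)]
assert sorted(table) == list(range(256))     # it really is a permutation
assert all(feistel8_inv(table[x], keys) == x for x in range(256))
```

The question in the post is essentially: given that we can now analyze and construct large S-boxes directly, why route the construction through this round-iteration machinery at all?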
> S-boxes leak secret key info through cache timing attacks. IMO, they should be avoided.

Responding more directly to Henry's comment: Small S-boxes, if the code is properly arranged, can stay entirely in the cache - you can access every entry in the table up front to get them all in there, for example - so they are less likely to leak information through cache timing attacks. Large S-boxes, on the other hand, are more likely to be partially loaded or knocked out of the cache, giving more purchase to cache timing attacks.

Then again, it's not just S-boxes and it's not just caches. TLBleed instead uses the TLB to grab EdDSA keys. I'd say the ability to safely do crypto on shared hardware is very much an open question at this point. Completely isolated co-processors - whether fixed-algorithm (now fairly common) or loadable (I don't know enough about the internals of Apple's T2 - it does crypto in such a co-processor, but whether the crypto algorithms it runs are hard-wired or in replaceable firmware I don't know) - may be the only way forward.

                                                        -- Jerry
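The "access every entry" idea Jerry mentions is the basis of a standard software countermeasure: instead of a single data-dependent load, scan the *whole* table and select the wanted entry with arithmetic masks, so the memory-access pattern is independent of the secret index. A minimal sketch (Python only illustrates the access pattern; real constant-time guarantees require C/assembly and careful auditing of the compiled code):

```python
def ct_sbox_lookup(table, index):
    """Select table[index] while touching every entry, with no branch
    or load address that depends on the secret index.  Assumes a table
    of at most 256 byte-sized entries."""
    result = 0
    for i, entry in enumerate(table):
        # mask is 0xFF exactly when i == index, else 0x00,
        # computed without branching on the secret value:
        diff = i ^ index
        mask = ((diff - 1) >> 8) & 0xFF   # borrow propagates iff diff == 0
        result |= entry & mask
    return result

# behaves like a plain lookup, but every entry is read every time
SBOX = list(reversed(range(256)))         # demo table
assert ct_sbox_lookup(SBOX, 37) == SBOX[37]
```

The obvious cost is that the lookup is linear in the table size, which is exactly why this trick stops being attractive as S-boxes grow large - the point of Jerry's small-vs-large distinction.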
At 10:52 AM 1/25/2019, Jerry Leichter wrote:
>> S-boxes leak secret key info through cache timing attacks. IMO, they should be avoided.
>Responding more directly to Henry's comment: Small S-boxes, if the code is properly arranged, can stay entirely in the cache - you can access every entry in the table up front to get them all in there, for example - so are less likely to leak information through cache timing attacks. Large S-boxes, on the other hand, are more likely to get partially loaded/knocked out of the cache, giving more purchase to cache timing attacks.

The last time I looked (several years ago), there appeared to be *no way* on many architectures to empty a cache line *without writing it to memory*, and more importantly, no way to guarantee that a cache line is *never written to memory*. There used to be architectures that would guarantee that a memory reference would *bypass* the cache(s), but that's now so slow that no one seems to care about it any more.

As is becoming obvious, our current models of programming languages and hardware architectures are completely inadequate to the task of ensuring that no bits leak. We need new *typing models* for programming languages, and new constraints for hardware, that guarantee that certain bits will go only where they're told to go, and *no place else*. Yes, there will continue to be devastating DPA attacks, but for the vast majority of privacy issues, it would be great if anyone pawing through the trash (memory, registers, caches, etc.) couldn't see private bits.

For example, caches might want to become *exclusive* -- i.e., an (address,value) pair can't reside in more than one location (memory, L3, L2, L1, etc.) at one time; it might even be interesting to include *registers* in this list (yes, tagging will be required, but that is long overdue for modern architectures). Perhaps this constraint should apply only to pages marked RW, as some RO pages -- e.g., shared .exe pages -- might be less interesting.
On Fri, 25 Jan 2019, Jerry Leichter wrote:
> I'd say the ability to safely do crypto on shared hardware is very much
> an open question at this point.

I'd say the answer to this question is a NO.

> Completely isolated co-processors -
> [...]
> - may be the only way forward.

What is the difference between a shared CPU and a shared (isolated) co-processor ?

-- ralf
>> I'd say the ability to safely do crypto on shared hardware is very much an open question at this point.
>
> I'd say the answer to this question is a NO.
>
>> Completely isolated co-processors - [...]
>> - may be the only way forward.
>
> What is the difference between a shared CPU and a shared (isolated) co-processor ?

You wouldn't share the co-processor: at any one time, it should be accessible to only a single security context, and you'd reset it to a constant state between security-context switches.

The issue here is side-channel attacks. If there are no channels between the crypto processing and code controlled by attackers, there is no attack against the crypto processing. The problem, of course, is that "channels" is extremely open-ended. But a co-processor with its own private memory does limit the possible attacks. Of the ones that have already been discovered, we could look at differential timing attacks (which we've pretty much learned to handle) and differential power analysis, which can be dealt with by careful hardware design if nothing else. That's not to say someone won't find another "channel" to attack, but at least all the ones we already know about are either irrelevant or can be made very difficult to exploit.

                                                        -- Jerry
Jerry Leichter <leichter@lrw.com> writes:
>I'd say the ability to safely do crypto on shared hardware is very much an
>open question at this point.

Thus the restatement of Law #1 of the 10 Immutable Laws of Security, "If a bad guy can persuade you to run his program on your computer, it’s not your computer any more", which in its inverse form is the Immutable Law of Cloud Computing Security:

"If a bad guy can persuade you to run your program on his computer, it’s not your program any more".

Peter.
>> I'd say the ability to safely do crypto on shared hardware is very much an
>> open question at this point.
>
> Thus the restatement of Law #1 of the 10 Immutable Laws of Security, "If a bad
> guy can persuade you to run his program on your computer, it’s not your
> computer any more", which in its inverse form is the Immutable Law of Cloud
> Computing Security:
>
> "If a bad guy can persuade you to run your program on his computer, it’s not
> your program any more".

There is an irony to this: Allegedly, when DES was first proposed, the NSA was skeptical of software implementations of cryptographic algorithms. In fact, they influenced DES to make it harder to implement in software (the initial and final permutations). Of course, without the ability to use this stuff in software, the public development of cryptography would have been completely stunted. So the NSA clearly had other motives for pushing for hardware-only implementations. We of course don't know exactly what the NSA does these days, though it is interesting that the FIPS standards for cryptography, for example, are clearly written with an eye to hardware implementations.

While the crypto hackers may not want to admit it, we appear, at least for the reasonably foreseeable future, to be at the end of the road for practical symmetric cryptographic algorithm development: nothing is likely to supersede AES in widespread practical use. We're probably converging on SHA-2, with a gradual move to SHA-3, for hash functions. Which makes putting those directly into hardware sensible.

There's a lot of paranoia around trusting the hardware implementations, though if you do a careful analysis of realistic attack models, you're probably safer using the hardware implementations than relying on software - especially when you're using shared infrastructure (if, as you point out, security on shared infrastructure is a particularly meaningful concept anyway - though economics keeps pushing us toward it).
An interesting question I haven't seen specifically attacked: are there usable side-channel attacks against software random-number generators? (Particularly the algorithms actually in use in modern systems.) These have seen many algorithmic attacks and defenses, but I don't recall anything like, say, a DPA attack against the stirring algorithms.

                                                        -- Jerry
On Fri, 25 Jan 2019, Jerry Leichter wrote:
> The issue here is side-channel attacks. If there are no channels between the
> crypto processing and code controlled by attackers, there is no attack
> against the crypto processing. The problem, of course, is that "channels" is
> extremely open-ended. But a co-processor with its own private memory
> does limit the possible attacks.

Of course using a co-processor does limit attacks, but the isolated co-processor doing "safe crypto processing" has to be authorized to do something valuable - like produce a signature - by the CPU on which the attacker-controlled code is also running. So even if there is less of a risk of leaking key material, we're miles away from "the ability to safely do[ing] crypto on shared hardware", which was my point.

-ralf