[Mono-dev] [PATCH] SecureString implementation

Sebastien Pouliot sebastien.pouliot at gmail.com
Fri Dec 9 19:40:27 EST 2005


Hello,

On Fri, 2005-12-09 at 18:13 -0500, Ben Maurer wrote:
> I think I am confused about the design of ProtectedMemory, can you
> correct the errors I make in the following reply? Mostly this because I
> am curious about the API now ;-).

Sure. Always happy to talk about security :)

> On Fri, 2005-12-09 at 17:39 -0500, Sebastien Pouliot wrote:
> > Hello Ben,
> > 
> > On Fri, 2005-12-09 at 16:28 -0500, Ben Maurer wrote:
> > > Why does this need to be implemented in unmanaged code? The win32 apis
> > > could be pinvoked, and we already have an AES implementation in managed
> > > code. 
> > 
> > Oh, believe me I have a *much* higher preference to managed code (and I
> > did try it) but in the end the choice wasn't about "how", it was about
> > "why".
> > 
> > The use cases for ProtectedMemory (and SecureString is very similar) are
> > very different from the "general" use cases of cryptography. I won't get
> > in long (and potentially boring for some) details (there's lot of docs
> > on MSDN for interested people) but PM and SS are mainly used to limit
> > the window of opportunity to access some data during software execution.
> 
> The primary goal of ProtectedMemory (or SecureString) seems to be:
>      1. To prevent the protected value from being exposed should it ever
>         be swapped to disk (and the computer rebooted into an OS that
>         could read the swap file)

Prevent? No.

It's less likely to happen (how much less depends on how it is used) but
it can't be prevented - there are too many other things to consider
outside DPAPI. We could call it a "widely varying indirect advantage".

>      2. To reduce the window for for a process with access to the
>         program's address space to view the data (what is an example of
>         where somebody would have access to the programs address space
>         but can't just call the DAPI code to decrypt the string? I don't
>         think I understand this case)

DPAPI uses different keys for each process, for each user (logon) and a
global one (cross-process). See MemoryProtectionScope in fx 2.0 for more
details.

SecureString uses (in Mono, and most likely in the MS implementation)
the process key. So only code executing in the same process can use
DPAPI to decrypt the value.
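
To make that concrete, here is a small sketch (mine, not from the patch)
of how the scopes look with the fx 2.0 API; note that ProtectedMemory
wants the buffer length to be a multiple of 16 bytes:

using System;
using System.Security.Cryptography;
using System.Text;

class ProtectedMemorySample {
	static void Main ()
	{
		// exactly 16 bytes - ProtectedMemory requires a multiple of 16
		byte[] secret = Encoding.UTF8.GetBytes ("top secret 1234!");

		// SameProcess: only code running in this process can unprotect
		ProtectedMemory.Protect (secret, MemoryProtectionScope.SameProcess);
		// ... keep the buffer encrypted while the value isn't needed ...
		ProtectedMemory.Unprotect (secret, MemoryProtectionScope.SameProcess);

		// other scopes: CrossProcess (global key), SameLogon (per-user key)
	}
}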

If someone gets into your address space then all bets are off (and that
includes encryption, digital signatures...).

>      3. (SecureString) Allow untrusted APIs to be given a string without
>         being able to read it. For example, I could give somebody a
>         password for a web service and know that they'd never be able to
>         get the password and send it to a place I didn't want it to go.

Not really (or not only) untrusted APIs. It's common for an application
to ask for, and keep, credentials (or other sensitive information) for a
very long time. It's also hard to predict how that information could be
disclosed in case the application fails[*]. Keeping it encrypted, but
accessible, is an easy way to "reduce" this problem (without
re-designing your application).

Also .NET has very special rules regarding strings (immutability,
interning) which make them even harder (well, almost impossible) to
clear properly once they have contained important data. So providing a
"string" class not bound by those rules is also very helpful.
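
As an illustration (a minimal sketch, not part of the patch), this is
the kind of usage SecureString enables - the characters are kept
encrypted, the instance stays mutable until MakeReadOnly, and Dispose
clears the buffer, none of which you can get from System.String:

using System;
using System.Runtime.InteropServices;
using System.Security;

class SecureStringSample {
	static void Main ()
	{
		using (SecureString password = new SecureString ()) {
			// in real code you'd append characters as the user types them
			foreach (char c in "s3cr3t")
				password.AppendChar (c);
			password.MakeReadOnly ();

			// when the clear text is really needed, decrypt into
			// unmanaged memory and zero it as soon as possible
			IntPtr ptr = Marshal.SecureStringToGlobalAllocUnicode (password);
			try {
				// ... hand ptr to the API that needs the password ...
			} finally {
				Marshal.ZeroFreeGlobalAllocUnicode (ptr);
			}
		}	// Dispose clears the encrypted buffer
	}
}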

Sadly SecureString isn't always usable with many APIs - including new
ones in 2.0 where byte[] overloads were added for passwords :(

[*] it's a major, yet little known, fact that "application failure" (not
to be confused with "security failure" ;-) is a very important aspect of
security. That's in large part due to how easy it can be to make
software fail...

> Right?

Mostly ;-)

> > There are some reasons this cannot be build on top of the managed
> > implementation. The biggest one, IMHO, is that the symmetric algorithms
> > in .NET have a design flaw[1]: the secret key is publicly exposed (and
> > some would say it's "by design" ;-). This it's not a big deal for the
> > most common usage where you already supply, hence know, the secret key.
> > 
> > However the lack of encapsulation of the key (like provided in
> > CryptoAPI, and many other toolkits, using opaque handles) makes it
> > "hard" to share the use of a common key without sharing the key itself.
> > By "hard" I mean it's "too easy to share" so it destroy the real
> > advantage of using PM/SS (as the window of opportunity on the secret key
> > would be larger than on the protected data).
> 
> How does having encapsulated in the runtime add additional security?
> Somebody who has access to reflect on private APIs (such as the secret
> key for ProtectedMemory/SecureString) should be able to get this data
> from the runtime just as easily (well, they might need some more hackery
> as the C library obviously isn't reflectable.

Right, in this case reflection wouldn't be enough to get the key. It may
seem a simple step (for you) to take the extra "hackery" step, but it
may be enough to stop a lot of people.
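
For contrast, here is what the "design flaw" in the quoted paragraph
looks like in a purely managed implementation (a hypothetical example,
not our code): the secret is one property getter away from anything
running in the process.

using System;
using System.Security.Cryptography;

class KeyExposureSample {
	static void Main ()
	{
		using (Rijndael aes = Rijndael.Create ()) {
			// if ProtectedMemory were built on this, its master key would
			// be sitting right here, readable by any code in the process
			// (directly, or via reflection on whatever field holds "aes")
			byte[] key = aes.Key;
			Console.WriteLine (BitConverter.ToString (key));
		}
	}
}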

The next logical step would be to "outsource" this service to the OS
(like we do in Windows) as the OS, having more control, can do a lot of
things more safely than applications. But even then it won't be
perfect...

> But it seems to be protection by obscurity rather than real protection).

No. Obscurity would be hiding the solution (e.g. a hardcoded key in the
runtime) or hiding the problem's existence (shhh, maybe no one will
notice ;-).

Admitting limits (in this case it means "reducing the window of
opportunity" and "limiting access to the shared key") isn't obscurity.
We clearly know that encryption or "insert your buzzword here" doesn't
solve every problem, yet we do offer encryption (and other buzzwords ;-)

Security is a pile of trade-offs. Nothing is 100% safe, but by using
many "tricks" we can cover a lot of it. That's still nowhere near 100%
but it's enough to make (some) software usable ;-)

> > Could it be implemented differently ? Maybe.
> > 
> > ProtectedData is very similar but has some different rules (e.g.
> > longer-term) and it's API makes it easy to use asymmetric cryptography
> > (which doesn't have the design flaw [1]) so it was fully implemented in
> > managed code. However a quick look at the PM API shows, without a doubt,
> > that the implementation is based on a symmetric block cipher.
> > 
> > Could I modify the managed AES implementation to achieve this ? Probably
> > for a good chunk of the current code/features. Hardly for the other
> > MemoryProtectionScope options.
> > 
> > 
> > [1] The asymmetric algorithms have the "opaque" concept (using the
> > CspParameters class) which can (this is really implementation dependent)
> > allow keypairs to be used without disclosing the private key (e.g. by
> > refusing to export it).
> 
> How is this opacity implemented?

This is implementation dependent. The important thing is that the design
allows it (while the symmetric API design prevents it).

Software implementations aren't very good at opacity. This is why you'll
(most probably) never find a software-only implementation rated higher
than level 2 (in FIPS 140 terms).

However hardware implementations (e.g. smartcards, SSL accelerators,
HSMs used in PKI CAs...) are much better at opacity. The design of the
asymmetric algorithms in the .NET framework can easily be used in this
case, while the design of the symmetric algorithms cannot (well, you're
still free to implement your own design if you like).
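
To show what I mean by the design allowing opacity (a sketch only - the
container name is made up, and the exact behaviour depends on the CSP
and, for some flags, on fx 2.0), the keypair can live in a named key
container, possibly on hardware, and never cross into managed memory:

using System;
using System.Security.Cryptography;

class OpaqueKeySample {
	static void Main ()
	{
		CspParameters csp = new CspParameters ();
		csp.KeyContainerName = "MyApplicationKey";	// must already exist
		csp.Flags = CspProviderFlags.UseExistingKey |
			CspProviderFlags.UseNonExportableKey;

		RSACryptoServiceProvider rsa = new RSACryptoServiceProvider (csp);

		byte[] data = new byte[] { 1, 2, 3 };
		// the signature is computed inside the CSP; only the result
		// comes back to managed code
		byte[] signature = rsa.SignData (data, "SHA1");

		// asking for the private half can simply be refused
		try {
			rsa.ExportParameters (true);
		} catch (CryptographicException) {
			Console.WriteLine ("private key is not exportable");
		}
	}
}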

>  If I have the ability to read a random
> address in memory, can't I (with some level of reverse engineering) find
> the shared key 

Well, the private key in the asymmetric case.

> with no more effort than I needed to gain access to the
> secure string in the first place? 

Yes, it's just harder. Sometimes it becomes just hard enough, or the
conditions aren't right, so that it can't be done (and we gain a
little ;-).

> How is the shared key protected from being swapped to disk?

In Mono, it's not. In fact this isn't something that should be part of
Mono itself, but part of the operating system and exposed through Mono.

In general it depends. I designed, for my previous employer, a kernel
mode driver (Windows) whose sole purpose was keeping secrets safe,
including not swapping them to disk. It's not that hard, but it imposes
very strict requirements on the application to be effective (which would
be very hard to "impose" on third parties).
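
Just to illustrate the general idea (this is not what the driver did,
nor something Mono does): an application can at least ask the OS, from
user mode, not to page a buffer out, e.g. with VirtualLock on Windows
(mlock being the POSIX equivalent). Locking can fail, and the guarantee
is much weaker than a kernel-mode solution:

using System;
using System.Runtime.InteropServices;

class PinnedSecret {
	[DllImport ("kernel32.dll", SetLastError=true)]
	static extern bool VirtualLock (IntPtr lpAddress, UIntPtr dwSize);

	[DllImport ("kernel32.dll", SetLastError=true)]
	static extern bool VirtualUnlock (IntPtr lpAddress, UIntPtr dwSize);

	static void Main ()
	{
		byte[] secret = new byte[32];
		// pin the array so the GC can't move it, then lock its pages
		GCHandle handle = GCHandle.Alloc (secret, GCHandleType.Pinned);
		try {
			IntPtr address = handle.AddrOfPinnedObject ();
			if (!VirtualLock (address, (UIntPtr)(uint) secret.Length))
				Console.WriteLine ("lock failed: " + Marshal.GetLastWin32Error ());

			// ... generate and use the secret ...

			Array.Clear (secret, 0, secret.Length);	// wipe before unlocking
			VirtualUnlock (address, (UIntPtr)(uint) secret.Length);
		} finally {
			handle.Free ();
		}
	}
}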
-- 
Sebastien Pouliot
email: sebastien at ximian.com
blog: http://pages.infinit.net/ctech/



