[Mono-dev] Should we replace MemoryStream?
Avery Pennarun
apenwarr at gmail.com
Tue Nov 10 12:56:46 EST 2009
On Tue, Nov 10, 2009 at 12:42 PM, Robert Jordan <robertj at gmx.net> wrote:
> An algorithm based on a MemoryStream implemented with chunks will
> perform better on average. I fully agree with that.
>
> The problem is that one method (GetBuffer) *will be* unexpectedly
> slower,
I just don't believe this is true. I think we're moving the slowness
from "add to buffer" into GetBuffer(). However, it is not
*additional* slowness. It is simply displaced slowness, and it's
potentially *less* slowness overall.
I'm not sure I can imagine a program that would be negatively affected
by this. Doesn't the GC cause unpredictable slowness sometimes anyway?
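To make "displaced slowness" concrete, here's a minimal sketch of the
idea, assuming a chunked layout (the class and names below are made up
for illustration, they're not Mono's code): Write() only ever allocates
a fresh fixed-size chunk, so the reallocate-and-copy that the flat
MemoryStream pays each time it grows is deferred to one coalescing copy
inside GetBuffer().

using System;
using System.Collections.Generic;

class ChunkedBuffer
{
    const int ChunkSize = 64 * 1024;
    readonly List<byte[]> chunks = new List<byte[]>();
    int usedInLast;   // bytes used in the last chunk
    int length;       // total bytes written

    public void Write(byte[] data, int offset, int count)
    {
        while (count > 0) {
            if (chunks.Count == 0 || usedInLast == ChunkSize) {
                chunks.Add(new byte[ChunkSize]);  // cheap: old data is never copied
                usedInLast = 0;
            }
            byte[] last = chunks[chunks.Count - 1];
            int n = Math.Min(count, ChunkSize - usedInLast);
            Buffer.BlockCopy(data, offset, last, usedInLast, n);
            usedInLast += n;
            offset += n;
            count -= n;
            length += n;
        }
    }

    // The one big copy happens here, once, instead of on every growth step.
    public byte[] GetBuffer()
    {
        byte[] all = new byte[length];
        int pos = 0;
        foreach (byte[] chunk in chunks) {
            int n = Math.Min(chunk.Length, length - pos);
            Buffer.BlockCopy(chunk, 0, all, pos, n);
            pos += n;
        }
        return all;
    }
}

The total number of bytes copied is no larger than in the flat,
doubling implementation; the appends simply stop paying for it, and
GetBuffer() pays once.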
> and another one, much harder to fix: the caller is allowed to change
> the buffer even before the stream has been closed. This means that
> after every GetBuffer call, the implementation must behave differently
> because it must somehow deal with a changed underlying buffer.
I don't think this is a problem either. Since you're now using the
returned buffer as your one-and-only chunk, you can use it just as you
always would. If someone then pushes so much new data into the stream
that it would exceed the buffer size, you have to do whatever the
non-chunked implementation does: either a) reject the write, or b) not
guarantee that the new data ends up in the array returned by the
earlier GetBuffer(). I'm not sure which is the correct behaviour, but
both are easily implemented in the chunked implementation too,
particularly since it has to support user-supplied fixed-length
buffers anyhow.
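For comparison, the flat MemoryStream already behaves like option b):
the array returned by GetBuffer() aliases the stream's contents only
until a later write forces a reallocation. The small test below (the
class name is mine) shows that behaviour, which a chunked
implementation that collapses to a single chunk on GetBuffer() can
reproduce:

using System;
using System.IO;

class GetBufferAliasing
{
    static void Main()
    {
        var ms = new MemoryStream();              // expandable stream; GetBuffer() is allowed
        ms.Write(new byte[] { 1, 2, 3 }, 0, 3);

        byte[] buf = ms.GetBuffer();              // aliases the internal, capacity-sized array

        ms.WriteByte(4);                          // still within capacity...
        Console.WriteLine(buf[3]);                // ...so it shows up in the old array: 4

        // Push enough data to force the stream past its current capacity.
        ms.Write(new byte[buf.Length], 0, buf.Length);
        ms.WriteByte(42);

        // The stream reallocated, so the earlier array no longer receives new data.
        Console.WriteLine(object.ReferenceEquals(buf, ms.GetBuffer()));  // False
    }
}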
Perhaps I'm missing something...
Avery