[Mono-bugs] [Bug 573682] segfaults when encoding/decoding non-UTF8 strings

bugzilla_noreply at novell.com bugzilla_noreply at novell.com
Wed Jan 27 12:36:13 EST 2010


http://bugzilla.novell.com/show_bug.cgi?id=573682

http://bugzilla.novell.com/show_bug.cgi?id=573682#c10


--- Comment #10 from Ted Unangst <tedu at fogcreek.com> 2010-01-27 17:36:13 UTC ---
Hmm, I can still reproduce this using both the 1250 and 1252 encodings with 2.6.1
built from source on a 32-bit Linux machine.  But I note that the sample must be
built with gmcs, not mcs, or it doesn't crash.  Removing the fallback code
entirely resolved both cases.  At least for us, the fallback code is what is
messing things up.

I think the continue is not the entire story.  As I read the code, even with
the misplaced continue, failing to decrement charIndex will over-read the input
array, but in that case we don't write to bytes (nor increment byteIndex), and
the crash we're looking at comes from bogus writes, not just out-of-bounds
reads.
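
To make the write-vs-read distinction concrete, here is a deliberately
simplified sketch, not the actual Mono encoder source, of the shape of loop the
argument is about; CanEncode and EncodeOne are stand-ins for the code-page
lookup:

    // Simplified sketch only -- the real converter is more involved.
    static int ConvertSketch(char[] chars, int charIndex, int charCount,
                             byte[] bytes, int byteIndex)
    {
        while (charCount-- > 0)
        {
            char c = chars[charIndex++];

            if (!CanEncode(c))
            {
                // Fallback path: if the index bookkeeping that belongs
                // here (e.g. rewinding charIndex for the fallback buffer)
                // is skipped, later iterations read the wrong chars[]
                // entries -- but nothing is written, because we bail out
                // before the store below.
                continue;
            }

            bytes[byteIndex++] = EncodeOne(c);   // the only store into bytes[]
        }
        return byteIndex;
    }

    static bool CanEncode(char c) { return c < 0x80; }  // stand-in predicate
    static byte EncodeOne(char c) { return (byte)c; }   // stand-in encoder

Whatever goes wrong with charIndex on the fallback path, bytes[] and byteIndex
are only touched on the success path, which is why an over-read alone doesn't
explain the corrupting writes.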

In our original app, the bug manifested as a string with a Length field of
1.9GB, but no crashes.  The test app loops and doubles the string in an attempt
to force dramatic corruption, but you may need to vary the input string or the
loop count to trigger it.
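
The doubling loop is roughly of the following shape; the seed string, code
page, and iteration count are placeholders rather than the attached test case:

    using System;
    using System.Text;

    class DoublingTest
    {
        static void Main()
        {
            Encoding enc = Encoding.GetEncoding(1252);

            // Seed containing a character the code page cannot represent,
            // so every round trip exercises the fallback path.
            string s = "abc\u0100def";

            for (int i = 0; i < 20; i++)
            {
                s = s + s;                       // double the string each pass
                byte[] bytes = enc.GetBytes(s);  // encode through the fallback
                string back = enc.GetString(bytes);

                // The corruption in our app showed up as an absurd Length,
                // so sanity-check the sizes on every iteration.
                if (bytes.Length != s.Length || back.Length != s.Length)
                    Console.WriteLine("iteration {0}: length mismatch ({1}/{2}/{3})",
                                      i, s.Length, bytes.Length, back.Length);
            }
        }
    }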
