[Mono-dev] Patch to boost speed of UnicodeEncoding
ClassDevelopment at A-SoftTech.com
Thu Mar 16 15:11:21 EST 2006
Hi Zac, Hi Kornél,
Some time ago (about 2-3 years) I spent quite a while improving the existing
String class, but as I got a new job back then I never had the time to finish
anything. My finding back then was that purely managed implementations
basically always outperformed the internalcalls (and I guess the JIT is even
more evolved now than it was 3 years ago).
However, as I said, it was never finished and contains bugs. Moreover, it
doesn't care at all about alignment issues.
If anyone wants to look at it, I attach my String-Testing class. You'll
find lots of different attempts to optimize the methods. But beware: the code
is in horrible shape, far from being usable.
Some optimizations use specific string-domain knowledge (like "equals"
testing the first char first and after that comparing from the end of the
string).
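To make the "equals" trick concrete, here is a minimal C sketch of the idea
(illustrative only, not the Mono code; the function name and UTF-16-as-
unsigned-short representation are my assumptions): check the first code unit
for a cheap early exit, then scan backwards from the end, where strings with
shared prefixes tend to diverge.

```c
#include <stddef.h>

/* Hypothetical sketch: compare two equal-length UTF-16 buffers by
 * checking the first unit, then scanning from the end. */
static int chars_equal(const unsigned short *a, const unsigned short *b,
                       size_t len)
{
    if (len == 0)
        return 1;
    if (a[0] != b[0])           /* cheap early exit on the first char */
        return 0;
    while (len-- > 1) {         /* then compare backwards from the end */
        if (a[len] != b[len])
            return 0;
    }
    return 1;
}
```

The backwards scan is a heuristic: it pays off when differing strings share
long common prefixes, which is common for paths, URLs, and identifiers.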
My conclusion was: we should have a few managed functions to do the work
(MemoryCopy, MemoryCompare, possibly for Byte* and Char*). They should be
managed so that optimizers and optimizing compilers are able to do
optimizations even at the IL level. Whenever possible the JIT should replace
these at runtime (provided they aren't optimized away) with
architecture-specific assembly versions, with the managed version as a
fallback.
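As a rough illustration of what such a portable MemoryCopy might look like
(a sketch in C, under my own assumptions; the real proposal is for managed
IL that the JIT could patch): copy word-sized chunks when both pointers are
suitably aligned, and fall back to bytes otherwise.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of a portable MemoryCopy: word-at-a-time when
 * both pointers are word-aligned, byte-at-a-time otherwise.  A JIT
 * could swap this for an architecture-specific version at runtime. */
static void memory_copy(unsigned char *dst, const unsigned char *src,
                        size_t count)
{
    if (((uintptr_t)dst % sizeof(size_t)) == 0 &&
        ((uintptr_t)src % sizeof(size_t)) == 0) {
        while (count >= sizeof(size_t)) {
            *(size_t *)dst = *(const size_t *)src;
            dst += sizeof(size_t);
            src += sizeof(size_t);
            count -= sizeof(size_t);
        }
    }
    while (count-- > 0)   /* byte tail (or unaligned fallback) */
        *dst++ = *src++;
}
```

The alignment check is exactly the kind of issue the mail above says the
original experiments ignored; handling it in the portable version keeps the
fast path safe on strict-alignment architectures.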
Here are some findings from microbenchmarks made back then (the first number
is always the time in milliseconds for the existing unmanaged implementation
(internalcall), the second for the tested managed implementation; the number
in parentheses is the length of the string tested):
Some methods still need internalcalls to create new Strings, but were still
faster than the native implementations (the optimum would be to internalize
CopyTo (000): 810 -> 381
CopyTo (010): 832 -> 441
CopyTo (100): 1352 -> 881
CopyTo (512): 3395 -> 3014
ToCharArray (000): 5067 -> 4466
ToCharArray (002): 5317 -> 4857
ToCharArray (015): 8041 -> 7691
ToCharArray (960): 2915 -> 2894
ToCharArray (with parameters): Similar to above
Trim (): 6930 -> 6760
Trim (custom search Chars): 10596 -> 9413
Trim (default search Chars): 10455 -> 7210
Trim (no trimmed chars, long string): 1893 -> 661
Trim (no trimmed chars, short string): 1893 -> 631
Replace (004 - one replace): 37264 -> 3135
Replace (004 - nothing to replace): 3735 -> 310
Replace (961 - everything replaced): 2584 -> 501
Replace (961 - only last char replaced): 2463 -> 481
Split (default split Chars, long string, lots split): 42421 -> 8523
Split (custom split Chars, long string, none found): 2944 -> 2263
Split (custom split Chars, long string, lots found): 22062 -> 7330
Split (default split Chars, short string, none found): 2093 -> 761
Split (default split Chars, short string, nearly only splitChars): 8002 ->
IndexOf (17): 1132 -> 791
IndexOf (2162): 10576 -> 7862
LastIndexOf (similar to above)
IndexOfAny (long string, nothing found): 25867 -> 2984 (break-even at ca.
LastIndexOfAny (similar to above)
PadLeft/PadRight: slightly slower than current (should get faster once an
optimized CharCopy is available): 1012 -> 1031
Remove: slightly slower than current (should get faster once an optimized
CharCopy is available): 2153 -> 2283
If somebody is interested in picking this up, I might be able to help a
----- Original Message -----
From: "Zac Bowling" <zac at zacbowling.com>
To: "Kornél Pál" <kornelpal at hotmail.com>
Cc: <mono-devel-list at lists.ximian.com>
Sent: Sunday, March 12, 2006 12:33 AM
Subject: Re: [Mono-dev] Patch to boost speed of UnicodeEncoding
That's interesting. Thanks for that. That gives me another perspective on
it.
One thing though. I think it should be "BitConverter.IsLittleEndian ==
bigEndian" rather than "!=". If the system is little-endian and the
conversion is not big-endian, then it would be the same. That confused me at
first too, and your test didn't catch that because it reverses what it does
each time.
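The condition being debated is easier to see written out. Below is a C
stand-in for the check (illustrative names, not Mono's code; in the C# patch
`BitConverter.IsLittleEndian` plays the role of the runtime probe and
`bigEndian` is UnicodeEncoding's field): the raw-copy fast path is only
valid when the host byte order already matches the byte order the encoding
must emit, i.e. when `IsLittleEndian != bigEndian` holds, so a condition
guarding the slow path would test `==`.

```c
#include <stdbool.h>
#include <stdint.h>

/* Runtime endianness probe, the C analogue of
 * BitConverter.IsLittleEndian. */
static bool host_is_little_endian(void)
{
    uint16_t probe = 1;
    return *(const uint8_t *)&probe == 1;  /* low byte stored first? */
}

/* big_endian corresponds to UnicodeEncoding's bigEndian field.
 * Direct copy is valid only when host and target byte order match:
 * little-endian host + little-endian target, or big + big. */
static bool can_copy_directly(bool big_endian)
{
    return host_is_little_endian() != big_endian;
}
```

For any given host, exactly one of the two target byte orders allows the
direct copy; the other must go through the byte-swapping path.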
I didn't even think of using Buffer.BlockCopy here. That's pretty cool.
I was going off the code inside String.cs, which uses a set of functions
called "memcpy". I need to do some research on why we don't use
BlockCopyInternal inside String.cs (maybe String.cs hasn't been looked
at in a while).
I'm going to set up a profiler and some testing/sanity-checking
scripts to start testing the speed advantages we get from mixing and
matching all these methods, to see how we can tweak everything there.
We might even find a way to tweak the base String class if
InternalBlockCopy is faster than that managed memcpy function (which
would speed up almost everything in Mono that uses String.Substring,
String.Clone, String.Concat, etc.).
Thanks a bunch...
Time for double byte char and pointer logic fun! -- Zac Bowling
----- Message from kornelpal at hotmail.com ---------
Date: Sat, 11 Mar 2006 14:40:33 +0100
From: Kornél Pál <kornelpal at hotmail.com>
Reply-To: Kornél Pál <kornelpal at hotmail.com>
Subject: Re: [Mono-dev] Patch to boost speed of UnicodeEncoding
To: Zac Bowling <zac at zacbowling.com>, mono-devel-list at lists.ximian.com
> I think doing something like in the attached draft is faster. No new
> object is created. Arrays are accessed using pointers. And I think there
> is no point in using a more complicated conversion method for short
> strings.
> This draft is very unsafe. It lacks any checks and does not perform any
> special character or byte sequence handling.
> Note that I haven't done any tests to determine whether using byte
> pointers or using int pointers and shift operations to swap bytes is
> faster. But mixing bytes and ints results in two different code paths for
> big- and little-endian encodings, while byte swapping can be performed
> using a single code path when using only bytes or only ints.
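The two swapping strategies Kornél mentions can be sketched side by side in
C (illustrative code under my own assumptions, not the attached draft; both
produce the same result regardless of host byte order, which is the "single
code path" property he describes):

```c
#include <stddef.h>
#include <stdint.h>

/* 1) Byte pointers: swap each pair of bytes individually.
 * count is the number of 16-bit code units. */
static void swap16_bytes(uint8_t *p, size_t count)
{
    for (size_t i = 0; i < count; i++, p += 2) {
        uint8_t t = p[0];
        p[0] = p[1];
        p[1] = t;
    }
}

/* 2) Int (32-bit) pointers with shifts: each load swaps the bytes
 * inside two 16-bit units at once (count must be even here). */
static void swap16_ints(uint32_t *p, size_t count)
{
    for (size_t i = 0; i < count / 2; i++) {
        uint32_t w = p[i];
        p[i] = ((w << 8) & 0xFF00FF00u) | ((w >> 8) & 0x00FF00FFu);
    }
}
```

The int version touches memory in wider chunks but needs an even unit count
(and alignment) plus a byte-wise tail in a complete implementation; which
one wins is exactly the measurement Kornél says he hasn't done.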
> ----- Original Message -----
> From: "Zac Bowling" <zac at zacbowling.com>
> To: <mono-devel-list at lists.ximian.com>
> Sent: Saturday, March 11, 2006 1:09 PM
> Subject: [Mono-dev] Patch to boost speed of UnicodeEncoding
>> Alright guys,
>> Here is a cool (and still incomplete) patch to speed up
>> System.Text.UnicodeEncoding I'm working on. Just want to make sure this
>> is sane before I finish it by getting everyone's opinions.
>> I was tinkering with this idea. Since the strings are stored in memory
>> as UTF-16 (UCS-2) already, the idea of converting them like we do now,
>> with a while loop, one char at a time, was really bothering me.
>> Directly copying what's in memory seems a little bit more sane. I don't
>> want to make it sound that easy, because it isn't (and that's maybe why
>> it wasn't done like this when it was first written). :-P
>> The biggest problem is that UnicodeEncoding can be bigEndian or
>> littleEndian, so I went through the logic and testing to see if the
>> system's endianness (with 'BitConverter.IsLittleEndian') matches the
>> endianness of the current Encoding class (using the 'bigEndian' bool
>> field), and if it doesn't, then use the same method we already use. (Is
>> that right? Is the internal version of UTF-16 we use in our strings
>> specific to the endianness of the system? I assumed yes here, but if
>> it's not, it's a simple change to remark it out.)
>> Also, since the memcpy function in String.cs uses some unsafe logic,
>> taking a possible hit for that with a really small string seems silly,
>> so I put in a condition that if the char count is less than or equal
>> to 10 chars, then use the existing method. (Maybe 10 chars should be
>> adjusted, or is that idea silly?)
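Putting the two ideas from the quoted mail together (endianness-gated bulk
copy plus a small-string threshold), here is a hedged C sketch of the shape
of the patch; the function name, the `endian_matches` flag, and the
hard-coded 10-char limit are taken from or assumed for illustration, not
copied from the actual C# code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Below this many chars, the per-char loop is assumed cheaper than
 * setting up the bulk copy (the mail's tentative value). */
#define SMALL_STRING_LIMIT 10

/* Sketch of a GetBytes-like conversion: when the encoding's byte
 * order matches the host's, long strings get one bulk copy instead
 * of a per-char loop. */
static void get_bytes(const uint16_t *chars, size_t count,
                      uint8_t *bytes, int endian_matches)
{
    if (endian_matches && count > SMALL_STRING_LIMIT) {
        /* Fast path: code units are already in the target byte
         * order, so a raw copy suffices. */
        memcpy(bytes, chars, count * sizeof(uint16_t));
        return;
    }
    /* Per-char loop; also handles the byte-swapped case. */
    for (size_t i = 0; i < count; i++) {
        uint8_t t[2];
        memcpy(t, &chars[i], 2);
        if (endian_matches) {
            bytes[2 * i]     = t[0];
            bytes[2 * i + 1] = t[1];
        } else {              /* emit in the opposite byte order */
            bytes[2 * i]     = t[1];
            bytes[2 * i + 1] = t[0];
        }
    }
}
```

Like the draft in the thread, this performs no validation (no surrogate or
byte-sequence handling); it only shows how the threshold and the endianness
check route between the two paths.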
>> Below is an unfinished sample of my idea. Of course, I will have to
>> reverse this logic for GetChars() (instead of GetBytes below) and
>> finish the overloads in System.Text.UnicodeEncoding's GetBytes and
>> GetChars methods, but I want to see what everyone thinks.
----- End message from kornelpal at hotmail.com -----
Mono-devel-list mailing list
Mono-devel-list at lists.ximian.com
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 69614 bytes
Desc: not available
Url : http://lists.ximian.com/pipermail/mono-devel-list/attachments/20060316/f385242b/attachment.obj