[Mono-dev] Dictionary`2: optimized and serialization-compatible with MS.net
Juraj Skripsky
js at hotfeet.ch
Mon Jun 11 08:43:47 EDT 2007
On Fri, 2007-06-08 at 16:29 +0200, Paolo Molaro wrote:
> > 1) Do not store the hashcode: much slower (10-80%).
> > The Hashtable class stores the hashcode too...
> > Recalculating the hashcode on every access makes no sense.
>
> This doesn't look like what I suggested, since GetHashCode()
> would be called more often only in the case of a rehash, and that
> should not cause an 80% slowdown.
> Could you post for example the new get implementation? There should be
> only a GetHashCode() call for the key, which you need already.
If we don't store the hashcodes of the keys inserted into the hashtable,
we have to call GetHashCode() in most methods (which contain the
following code or variations thereof), not just when rehashing:
------
// get first item of linked list corresponding to given key
int hashCode = (hcp.GetHashCode (key) & Int32.MaxValue);
int cur = table [hashCode % table.Length];

// walk linked list until right slot is found or end is reached
while (cur != NO_SLOT) {
	if (linkSlots [cur].HashCode == hashCode &&
	    hcp.Equals (dataSlots [cur].Key, key))
		return dataSlots [cur].Value;
	cur = linkSlots [cur].Next;
}
------
"linkSlots [cur].HashCode == hashCode" would have to be replaced with
"linkSlots [cur].Key.GetHashCode() == hashCode"...
Or am I missing something here?
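In other words, the lookup loop for variant 1 would look roughly like
this (same names as above; just a sketch, not an actual patch):
------
int hashCode = (hcp.GetHashCode (key) & Int32.MaxValue);
int cur = table [hashCode % table.Length];

while (cur != NO_SLOT) {
	// without a stored hashcode, every probe re-hashes the slot's key
	if ((hcp.GetHashCode (dataSlots [cur].Key) & Int32.MaxValue) == hashCode &&
	    hcp.Equals (dataSlots [cur].Key, key))
		return dataSlots [cur].Value;
	cur = linkSlots [cur].Next;
}
------
GetHashCode() then runs once per probe instead of once per lookup,
which would explain the slowdown I measured.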
> > 2) Separate key and value arrays: no difference (with current GC).
> > If the separation reduces the pressure on future GCs (sgen?),
> > then it makes a lot of sense.
>
> I guess it would be significant only with big hash tables
> and it would show up even with the current GC: say you have
> a Hashtable<int,object>. With separate arrays the key array
> is completely skipped and only the values array is scanned
> for references. With the unified array the current GC will have
> to scan all of it. A test case could be to create a few
> large hash tables (no need to fill them, just force the capacity to
> 1 million entries) and keep them alive while you execute GC.Collect()
> several times. Measure the total execution time of the test
> with the split and with the unified array. The test should last
> at least 10 seconds. Increasing the size of the arrays
> should make the difference larger (especially if the split arrays fit
> into the cache while the unified one doesn't).
I'll cook up a test case following your description and post the
results.
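Roughly along these lines, I imagine (class name, counts and timing
are placeholders):
------
using System;
using System.Collections.Generic;

class GcScanTest {
	static void Main ()
	{
		// a few large, pre-sized tables; no need to fill them,
		// the backing arrays are what the GC has to scan
		Dictionary<int, object>[] tables = new Dictionary<int, object> [4];
		for (int i = 0; i < tables.Length; i++)
			tables [i] = new Dictionary<int, object> (1000000);

		// tune the iteration count so the run lasts at least ~10 seconds
		DateTime start = DateTime.Now;
		for (int i = 0; i < 500; i++)
			GC.Collect ();
		Console.WriteLine ("total: {0} ms",
			(DateTime.Now - start).TotalMilliseconds);

		GC.KeepAlive (tables);
	}
}
------
Run once against the split-array build and once against the unified
one, and compare the totals.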
> The only downside of the split arrays is a slight increase
> in memory usage: 20 bytes on 32 bit systems. This (apart from the cache
> locality effects described above) is largely offset by the wins we'd
> have with keys and values with different alignment constraints
> (because the unified array entries would need to be aligned to
> the max alignment of key and value). Also, given the not-completely
> precise GC's issues with large arrays, having two smaller ones
> is likely to make its life easier.
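To make the alignment point concrete, here is a made-up layout
(UnifiedSlot is purely illustrative, not the real entry struct):
------
using System;
using System.Runtime.InteropServices;

struct UnifiedSlot {
	public long Key;   // needs 8-byte alignment
	public byte Value; // forces 7 bytes of padding per entry
}

class AlignmentDemo {
	static void Main ()
	{
		// prints 16: 8 (key) + 1 (value) + 7 (padding);
		// split arrays would pay 8 + 1 = 9 bytes per entry instead
		Console.WriteLine (Marshal.SizeOf (typeof (UnifiedSlot)));
	}
}
------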
>
> > 3) 0 as 'no slot' value in table: slightly faster (2-7%).
> > The change is trivial (i.e. add 1 when storing a value in table,
> > subtract 1 when reading from it) and gives us a slight speed-up.
>
> Nice.
>
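For reference, the whole trick behind change 3 is an offset of one
(SetFirstSlot/GetFirstSlot are made-up names; the real code would
just inline the +/- 1):
------
const int NO_SLOT = -1;
int [] table;

// 0 in table means "empty bucket", so a freshly allocated (zeroed)
// array needs no initialization pass; slot indices are stored + 1
void SetFirstSlot (int bucket, int slot)
{
	table [bucket] = slot + 1;
}

int GetFirstSlot (int bucket)
{
	return table [bucket] - 1; // an empty bucket (0) reads as NO_SLOT
}
------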
> > If it were up to me, I would do changes 2 and 3.
> > What do you think (especially about change 2)?
>
> Well, it's nice that with change 2 you found no regression; I think
> a specific test would be able to show the differences. So 2 and 3
> look viable, and I'll still wait for more data to decide on 1.
> Thanks!
>
> lupus
Thanks for your feedback!
I can't wait for the optimized implementation to be committed to svn. As
soon as that is done, it will be interesting to replace private/internal
hashtables in the class libs with Dictionaries (e.g. QuickSearch.shift
in System.Text.RegularExpressions) for the NET_2_0 profile and measure
the speedup.
Juraj