Can a typical very cheap and outrageously fast (non-cryptographic) hash function over a string of bytes or machine words be made better-behaved by adding or XORing in i, the offset or index of the machine word currently being fetched, as the hash proceeds? That is:
for (size_t i = 0; i < length; i++)
{
    hash = hash + i;                 /* extra step */
    hash = some_fn(hash, data[i]);
}
or, more generally:
for (size_t i = 0; i < length; i++)
{
    hash = some_fn(hash, data[i], i);
}
instead of:
for (size_t i = 0; i < length; i++)
{
    hash = some_fn(hash, data[i]);
}
Since the extra step in the first case has an effect much like XORing/adding in a fixed constant string (exactly so when the mixing is linear, e.g. pure XOR), I am doubtful that it is any better than simply XORing/adding in the total length, which is of course vastly cheaper. The general second case is a completely different matter. Am I correct in my thinking?
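As a sanity check of this intuition, here is a minimal C sketch. It assumes a deliberately weak pure-XOR mixer standing in for some_fn (not any real hash function), and uses the XOR form of the extra step so that the "constant string" equivalence is exact; the position-dependent multiplier in the last variant is likewise just an illustrative choice.

```c
#include <stddef.h>
#include <stdint.h>

/* Plain hash: some_fn(hash, data[i]) is a deliberately weak
   pure-XOR mixer, chosen so the effects are easy to see. */
uint64_t hash_plain(const uint8_t *data, size_t length)
{
    uint64_t hash = 0;
    for (size_t i = 0; i < length; i++)
        hash ^= data[i];
    return hash;
}

/* Case 1: fold the index in as an extra step before mixing.
   With a linear (XOR) mixer, the i's collapse into a constant
   that depends only on length, so permutations of the same
   bytes still collide with each other. */
uint64_t hash_case1(const uint8_t *data, size_t length)
{
    uint64_t hash = 0;
    for (size_t i = 0; i < length; i++) {
        hash ^= i;          /* extra step */
        hash ^= data[i];    /* some_fn(hash, data[i]) */
    }
    return hash;
}

/* Case 2: some_fn(hash, data[i], i) uses i non-trivially, here
   by scaling each byte by an odd, position-dependent constant.
   This makes the result genuinely order-sensitive. */
uint64_t hash_case2(const uint8_t *data, size_t length)
{
    uint64_t hash = 0;
    for (size_t i = 0; i < length; i++)
        hash ^= data[i] * (uint64_t)(2 * i + 1);
    return hash;
}
```

With these toy definitions, hash_plain and hash_case1 both give the same value for "ab" and "ba" (case 1 differs from the plain hash only by a length-dependent constant), while hash_case2 distinguishes the two orderings, which matches the distinction drawn above between the two cases.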