Module Memoize
Memoizer
A metaclass implementation to count hits and misses of the computed
values that various methods cache in memory.
Use of this module assumes that wrapped methods are coded to cache their
values in a consistent way. Here is an example of wrapping a method
that returns a computed value, with no input parameters:
    memoizer_counters = []                                     # Memoization
    memoizer_counters.append(SCons.Memoize.CountValue('foo'))  # Memoization

    def foo(self):
        try:                                                   # Memoization
            return self._memo['foo']                           # Memoization
        except KeyError:                                       # Memoization
            pass                                               # Memoization
        result = self.compute_foo_value()
        self._memo['foo'] = result                             # Memoization
        return result
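For illustration only, here is a minimal, self-contained sketch of the same
single-value pattern outside of SCons. The Widget class, its
compute_foo_value() body, and the hand-rolled hit/miss tally are hypothetical
stand-ins for the CountValue counter this module would normally provide:

    # Hypothetical stand-alone sketch; a real SCons class would register a
    # SCons.Memoize.CountValue('foo') counter instead of counting by hand.
    class Widget(object):
        def __init__(self):
            self._memo = {}     # per-instance cache, as in the pattern above
            self.hits = 0       # stand-in for CountValue hit counting
            self.misses = 0     # stand-in for CountValue miss counting

        def compute_foo_value(self):
            return sum(range(1000))          # placeholder for expensive work

        def foo(self):
            try:
                result = self._memo['foo']
                self.hits += 1
                return result
            except KeyError:
                self.misses += 1
            result = self.compute_foo_value()
            self._memo['foo'] = result
            return result

    w = Widget()
    w.foo(); w.foo()
    print(w.hits, w.misses)                  # 1 hit, 1 miss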
Here is an example of wrapping a method that will return different values
based on one or more input arguments:
    def _bar_key(self, argument):                                        # Memoization
        return argument                                                  # Memoization

    memoizer_counters.append(SCons.Memoize.CountDict('bar', _bar_key))  # Memoization

    def bar(self, argument):
        memo_key = argument                                              # Memoization
        try:                                                             # Memoization
            memo_dict = self._memo['bar']                                # Memoization
        except KeyError:                                                 # Memoization
            memo_dict = {}                                               # Memoization
            self._memo['bar'] = memo_dict                                # Memoization
        else:                                                            # Memoization
            try:                                                         # Memoization
                return memo_dict[memo_key]                               # Memoization
            except KeyError:                                             # Memoization
                pass                                                     # Memoization
        result = self.compute_bar_value(argument)
        memo_dict[memo_key] = result                                     # Memoization
        return result
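Again purely as illustration, here is a self-contained sketch of the keyed
pattern. The Widget class and its compute_bar_value() method are hypothetical,
and the per-argument dictionary plays the role whose hits and misses a
CountDict counter would record:

    # Hypothetical stand-alone sketch; a real SCons class would register a
    # SCons.Memoize.CountDict('bar', _bar_key) counter for the statistics.
    class Widget(object):
        def __init__(self):
            self._memo = {}

        def compute_bar_value(self, argument):
            return argument * argument       # placeholder for expensive work

        def bar(self, argument):
            memo_key = argument              # key derived from the arguments
            try:
                memo_dict = self._memo['bar']
            except KeyError:
                memo_dict = {}
                self._memo['bar'] = memo_dict
            else:
                try:
                    return memo_dict[memo_key]   # cache hit
                except KeyError:
                    pass                         # miss for this particular key
            result = self.compute_bar_value(argument)
            memo_dict[memo_key] = result
            return result

    w = Widget()
    print(w.bar(3), w.bar(3), w.bar(4))      # 9 9 16; the second call is cached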
At one point we avoided replicating this sort of logic in all the methods
by putting it right into this module, but we've moved away from that at
present (see the "Historical Note" below).
Deciding what to cache is tricky, because different configurations
can have radically different performance tradeoffs, and because the
tradeoffs involved are often so non-obvious. Consequently, deciding
whether or not to cache a given method will likely be more of an art than
a science, but should still be based on available data from this module.
Here are some VERY GENERAL guidelines about deciding whether or not to
cache return values from a method that's being called a lot:
-- The first question to ask is, "Can we change the calling code
so this method isn't called so often?" Sometimes this can be
done by changing the algorithm. Sometimes the *caller* should
be memoized, not the method you're looking at.
-- The memoized function should be timed with multiple configurations
to make sure it doesn't inadvertently slow down some other
configuration.
-- When memoizing values based on a dictionary key composed of
input arguments, you don't need to use all of the arguments
if some of them don't affect the return values (see the sketch
after this list).
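As a hypothetical illustration of that last point: if a method's result
depends on only one of its arguments, the memo key can be built from that
argument alone. The Catalog class, its lookup() method, and its arguments
below are invented for the example:

    # Hypothetical example: 'verbose' affects only logging, never the return
    # value, so the memo key is built from 'name' alone.
    class Catalog(object):
        def __init__(self):
            self._memo = {}

        def lookup(self, name, verbose=False):
            memo_dict = self._memo.setdefault('lookup', {})
            memo_key = name                  # 'verbose' deliberately excluded
            try:
                return memo_dict[memo_key]
            except KeyError:
                pass
            if verbose:
                print("computing value for %s" % name)
            result = len(name) * 7           # placeholder computation
            memo_dict[memo_key] = result
            return result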
Historical Note: The initial Memoizer implementation actually handled
the caching of values for the wrapped methods, based on a set of generic
algorithms for computing hashable values based on the method's arguments.
This collected caching logic nicely, but had two drawbacks:
Running arguments through a generic key-conversion mechanism is slower
(and less flexible) than just coding these things directly. Since the
methods that need memoized values are generally performance-critical,
slowing them down in order to collect the logic isn't the right
tradeoff.
Use of the memoizer really obscured what was being called, because
all the memoized methods were wrapped with re-used generic methods.
This made it more difficult, for example, to use the Python profiler
to figure out how to optimize the underlying methods.
Classes

    Counter
        Base class for counting memoization hits and misses.

    CountValue
        A counter class for simple, atomic memoized values.

    CountDict
        A counter class for memoized values stored in a dictionary, with
        keys based on the method's input arguments.

    Memoizer
        Object which performs caching of method calls for its 'primary'
        instance.

    Memoized_Metaclass
Variables

    __revision__ = 'src/engine/SCons/Memoize.py 2014/09/27 12:51:43 garyo'
    __doc__ = """Memoi...
    use_memoizer = None
    CounterList = []
    __package__ = 'SCons'