181

For caching purposes I need to generate a cache key from GET arguments which are present in a dict.

Currently I'm using sha1(repr(sorted(my_dict.items()))) (sha1() is a convenience method that uses hashlib internally), but I'm curious whether there's a better way.
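Roughly, the current approach boils down to something like this (`cache_key` is just an illustrative name for the convenience wrapper):

```python
import hashlib

def cache_key(params):
    # Sort the items for a deterministic ordering, then hash the repr.
    # Only reliable for flat dicts whose values have stable reprs.
    return hashlib.sha1(repr(sorted(params.items())).encode("utf-8")).hexdigest()

print(cache_key({"a": 1, "b": 2}))  # same digest as for {"b": 2, "a": 1}
```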

ThiefMaster
    This might not work with nested dicts. The shortest solution is to use json.dumps(my_dict, sort_keys=True) instead, which will recurse into dict values. – Andrey Fedorov Apr 08 '14 at 19:15
    FYI re: dumps, http://stackoverflow.com/a/12739361/1082367 says "The output from pickle is not guaranteed to be canonical for similar reasons to dict and set order being non-deterministic. Don't use pickle or pprint or repr for hashing." – Matthew Cornell Dec 16 '14 at 12:49
  • Sort the dict keys, not the items; I would also send the keys to the hash function. – nyuwec May 26 '16 at 12:59
    Interesting backstory about hashing mutable data structures (like dictionaries): https://www.python.org/dev/peps/pep-0351/ was proposed to allow arbitrarily freezing objects, but rejected. For rationale, see this thread in python-dev: https://mail.python.org/pipermail/python-dev/2006-February/060793.html – FluxLemur Mar 29 '18 at 16:47
    If your data is json format, and you want semantically invariant hashing, checkout https://github.com/schollii/sandals/blob/master/json_sem_hash.py. It works on nested structures (of course, since json), and does not depend on internals of dict like preserved order (which has evolved over the lifetime of python), and will give the same hash if two data structures are semantically the same (like `{'a': 1, 'b':2}` is semantically the same as `{'b':2, 'a':1}`). I haven't used it on anything too complicated yet so YMMV but feedback welcome. – Oliver Jul 10 '20 at 15:45

14 Answers

153

Using sorted(d.items()) isn't enough to get us a stable repr. Some of the values in d could be dictionaries too, and their keys will still come out in an arbitrary order. As long as all the keys are strings, I prefer to use:

json.dumps(d, sort_keys=True)

That said, if the hashes need to be stable across different machines or Python versions, I'm not certain that this is bulletproof. You might want to add the separators and ensure_ascii arguments to protect yourself from any changes to the defaults there. I'd appreciate comments.
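A sketch of what pinning those arguments might look like (`stable_hash` is a name I've made up; the exact parameter choices are up to you):

```python
import hashlib
import json

def stable_hash(d):
    # Pin separators and ensure_ascii explicitly so the output cannot
    # drift if the json module's defaults ever change.
    encoded = json.dumps(d, sort_keys=True, separators=(",", ":"),
                         ensure_ascii=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()
```

Hashing the encoded string with hashlib rather than the built-in hash() also keeps the result stable across interpreter restarts.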

Jack O'Connor
    This seems like the best solution, but could you expound on why you think separators and ensure_ascii might be useful? – Andrey Fedorov Apr 08 '14 at 19:13
    This is just being paranoid, but JSON allow most characters to show up in strings without any literal escaping, so the encoder gets to make some choices about whether to escape characters or just pass them through. The risk then is that different versions (or future versions) of the encoder could make different escaping choices by default, and then your program would compute different hash values for the same dictionary in different environments. The `ensure_ascii` argument would protect against this entirely hypothetical problem. – Jack O'Connor Apr 09 '14 at 04:31
    Kind of unrelated, but you also need to worry about your encoder's unicode escaping if you plan on `eval`ing the JSON output as literal JS. Fun times. http://timelessrepo.com/json-isnt-a-javascript-subset – Jack O'Connor Apr 09 '14 at 04:33
    That's really neat! One line and you get nested dicts sorted too :) – Tommaso Barbugli Jul 29 '14 at 16:50
    I tested the performance of this with different dataset, it's much much faster than `make_hash`. https://gist.github.com/charlax/b8731de51d2ea86c6eb9 – charlax Sep 18 '14 at 22:33
    @charlax ujson doesn't guarantee the order of the dict pairs, so it's not safe to do that – arthurprs Jul 03 '15 at 12:48
    This solution only works as long as all keys are strings, e.g. json.dumps({'a': {(0, 5): 5, 1: 3}}) fails. – kadee Jun 01 '16 at 11:01
    @JackO'Connor Thankfully, `ensure_ascii` defaults to `True` at least as far back as 2.6. – kkurian Jun 07 '16 at 21:03
    Some datatype are not json serializable like datetime.datetime. – Lorenzo Belli May 08 '17 at 11:19
    @LorenzoBelli, you can overcome that by adding `default=str` to the `dumps` command. Seems to work nicely. – mlissner May 04 '18 at 23:10
    @mlissner Then you have two non-equal dicts that generate the same hash (one with integer keys, one with string keys) – DylanYoung Nov 20 '19 at 19:38
  • It's understandable that this solution is faster for recursive dicts and the like because json is implemented in C, but given the redundant work, a recursive-dict-hashing module implemented in C could be expected to be faster still. – user202729 Jan 11 '21 at 04:30
121

If your dictionary is not nested, you could make a frozenset with the dict's items and use hash():

hash(frozenset(my_dict.items()))

This is much less computationally intensive than generating the JSON string or representation of the dictionary.

UPDATE: Please see the comments below for why this approach might not produce a stable result.
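Within a single interpreter session the approach behaves as expected; the caveat is only about stability across runs:

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 2, 'a': 1}

# The frozensets compare equal, so their hashes are equal too -- but only
# within this run; hash randomization (PYTHONHASHSEED) changes the values
# between interpreter restarts.
assert hash(frozenset(d1.items())) == hash(frozenset(d2.items()))
```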

mit
Imran
    This didn't work for me with a nested dictionary. I haven't tried the solution below (too complicated). The OP's solution works perfectly fine. I substituted sha1 with hash to save an import. – spatel Jan 18 '12 at 07:51
  • You can also just use hash(tuple(my_dict.items())) to save a few characters for non-nested items. – Ceasar Bautista Feb 05 '12 at 04:01
    @Ceaser That won't work because tuple implies ordering but dict items are unordered. frozenset is better. – Antimony Jul 30 '12 at 11:55
    Beware of the built-in hash if something needs to be consistent across different machines. Implementations of python on cloud platforms like Heroku and GAE will return different values for hash() on different instances making it useless for anything that must be shared between two or more "machines" (dynos in the case of heroku) – B Robster Apr 22 '15 at 22:40
    It might be interesting the `hash()` function does not produce a stable output. This means that, given the same input, it returns different results with different instances of the same python interpreter. To me, it looks like some sort of seed value is generated every time the interpreter is started. – Hermann Schachner May 28 '15 at 11:03
    Expected: the seed is introduced for security reasons, as far as I remember, to add some kind of memory randomization. So you cannot expect the hash to be the same between two Python processes. – Nikokrock May 29 '15 at 13:03
  • this solution does not solve the general problem of getting a hash of a dictionary – Daniel777 Jul 08 '15 at 20:22
  • If your dictionary has just one key, `hash(repr(my_dict))` will work also. – fiatjaf Aug 30 '16 at 20:04
    @HermannSchachner That's correct - random seed for hash() was introduced in Python 3.3 for security reasons: http://stackoverflow.com/a/27522708/196206 That renders this answer unusable (except if you need hash stability only during a single interpreter lifetime). – Messa Mar 31 '17 at 12:11
  • Note that Python's hash() is not a cryptographic hash function and thus _not collision resistant_. When a hash matches, you will still have to compare that the value associated with the hash is the expected one. – pcworld Nov 07 '19 at 23:43
64

EDIT: If all your keys are strings, then before continuing to read this answer, please see Jack O'Connor's significantly simpler (and faster) solution (which also works for hashing nested dictionaries).

Although an answer has been accepted, the title of the question is "Hashing a python dictionary", and the answer is incomplete as regards that title. (As regards the body of the question, the answer is complete.)

Nested Dictionaries

If one searches Stack Overflow for how to hash a dictionary, one might stumble upon this aptly titled question, and leave unsatisfied if one is attempting to hash multiply nested dictionaries. The answer above won't work in this case, and you'll have to implement some sort of recursive mechanism to retrieve the hash.

Here is one such mechanism:

import copy

def make_hash(o):
  """
  Makes a hash from a dictionary, list, tuple or set to any level, that contains
  only other hashable types (including any lists, tuples, sets, and
  dictionaries).
  """
  if isinstance(o, (set, tuple, list)):
    return tuple([make_hash(e) for e in o])
  elif not isinstance(o, dict):
    return hash(o)

  new_o = copy.deepcopy(o)
  for k, v in new_o.items():
    new_o[k] = make_hash(v)

  return hash(tuple(frozenset(sorted(new_o.items()))))

Bonus: Hashing Objects and Classes

The hash() function works great when you hash classes or instances. However, here is one issue I found with hash, as regards objects:

class Foo(object): pass
foo = Foo()
print (hash(foo)) # 1209812346789
foo.a = 1
print (hash(foo)) # 1209812346789

The hash is the same, even after I've altered foo. This is because the identity of foo hasn't changed, so the hash is the same. If you want foo to hash differently depending on its current definition, the solution is to hash off whatever is actually changing. In this case, the __dict__ attribute:

class Foo(object): pass
foo = Foo()
print (make_hash(foo.__dict__)) # 1209812346789
foo.a = 1
print (make_hash(foo.__dict__)) # -78956430974785

Alas, when you attempt to do the same thing with the class itself:

print (make_hash(Foo.__dict__)) # TypeError: unhashable type: 'dict_proxy'

The class __dict__ property is not a normal dictionary:

print (type(Foo.__dict__)) # type <'dict_proxy'>

Here is a similar mechanism as previous that will handle classes appropriately:

import copy

DictProxyType = type(object.__dict__)

def make_hash(o):
  """
  Makes a hash from a dictionary, list, tuple or set to any level, that
  contains only other hashable types (including any lists, tuples, sets, and
  dictionaries). In the case where other kinds of objects (like classes) need
  to be hashed, pass in a collection of object attributes that are pertinent.
  For example, a class can be hashed in this fashion:

    make_hash([cls.__dict__, cls.__name__])

  A function can be hashed like so:

    make_hash([fn.__dict__, fn.__code__])
  """
  if type(o) == DictProxyType:
    o2 = {}
    for k, v in o.items():
      if not k.startswith("__"):
        o2[k] = v
    o = o2

  if isinstance(o, (set, tuple, list)):
    return tuple([make_hash(e) for e in o])
  elif not isinstance(o, dict):
    return hash(o)

  new_o = copy.deepcopy(o)
  for k, v in new_o.items():
    new_o[k] = make_hash(v)

  return hash(tuple(frozenset(sorted(new_o.items()))))

You can use this to return a hash tuple of however many elements you'd like:

# -7666086133114527897
print (make_hash(func.__code__))

# (-7666086133114527897, 3527539)
print (make_hash([func.__code__, func.__dict__]))

# (-7666086133114527897, 3527539, -509551383349783210)
print (make_hash([func.__code__, func.__dict__, func.__name__]))

NOTE: all of the above code assumes Python 3.x. I did not test it in earlier versions, although I assume make_hash() will work in, say, 2.7.2. As far as making the examples work, I do know that

func.__code__ 

should be replaced with

func.func_code
MarredCheese
jomido
  • isinstance takes a sequence for the second argument, so isinstance(o, (set, tuple, list)) would work. – Xealot Feb 13 '13 at 01:42
  • thanks for making me realize frozenset could consistently hash querystring parameters :) – Xealot Feb 14 '13 at 14:26
    The items need to be sorted in order to create the same hash if the dict item order is different but the key values aren't -> return hash(tuple(frozenset(sorted(new_o.items())))) – Bas Koopmans Oct 28 '13 at 13:11
  • Nice! I also added a call to `hash` around lists and tuples. Otherwise it takes my lists of integers that happen to be values in my dictionary, and returns back lists of hashes, which is not what I want. – Sergey Orshanskiy Jan 07 '15 at 23:09
  • A frozenset is an UNORDERED collection, so there is nothing to gain by sorting its inputs. Lists and tuples on the other hand are ORDERED collections ("sequences"), and so the hash value should be affected by the order of items within. You shouldn't sort them! – RobM Apr 21 '16 at 15:04
  • @RobM: ah! So it should be hash(tuple(sorted(frozenset(new_o.items())))). Comment? – jomido Apr 22 '16 at 19:15
  • Using `foo.__dict__` isn't a good idea; that includes everything, including attributes that aren't necessarily relevant to the state (cache, for example). `Foo` should [define its own `__hash__` method](https://docs.python.org/3/reference/datamodel.html#object.__hash__), and include only the values relevant for determining if two `Foo`s are the same object or not. – Izkata May 12 '17 at 17:16
    In Python 3.7, this solution peforms worse than `hash(json.dumps(d, sort_keys=True))` by a factor of 18. Proof: https://pastebin.com/XDZGhNHs – kolypto May 27 '19 at 10:15
  • Hi! I have a question: is it really necessary to deepcopy at each level of recursion? I don't think so. Removing it would improve performance a lot. – garciparedes Sep 16 '20 at 12:44
16

Here is a clearer solution.

def freeze(o):
    if isinstance(o, dict):
        return frozenset({k: freeze(v) for k, v in o.items()}.items())

    if isinstance(o, list):
        return tuple([freeze(v) for v in o])

    return o


def make_hash(o):
    """
    Makes a hash out of anything that contains only list, dict and hashable
    types, including string and numeric types.
    """
    return hash(freeze(o))
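A quick check of the behaviour (the two functions are restated in condensed form so this snippet runs standalone):

```python
def freeze(o):
    if isinstance(o, dict):
        return frozenset((k, freeze(v)) for k, v in o.items())
    if isinstance(o, list):
        return tuple(freeze(v) for v in o)
    return o

def make_hash(o):
    return hash(freeze(o))

# Key order doesn't matter, even in nested dicts:
assert make_hash({'a': 1, 'b': {'c': [1, 2]}}) == make_hash({'b': {'c': [1, 2]}, 'a': 1})
```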
smartnut007
    If you change `if isinstance(o,list):` to `if isinstance(obj, (set, tuple, list)):` then this function can work on any object. – Peter Schorn Apr 12 '20 at 03:19
12

The code below avoids using the Python hash() function because it will not provide hashes that are consistent across restarts of Python (see hash function in Python 3.3 returns different results between sessions). make_hashable() will convert the object into nested tuples and make_hash_sha256() will also convert the repr() to a base64-encoded SHA256 hash.

import hashlib
import base64

def make_hash_sha256(o):
    hasher = hashlib.sha256()
    hasher.update(repr(make_hashable(o)).encode())
    return base64.b64encode(hasher.digest()).decode()

def make_hashable(o):
    if isinstance(o, (tuple, list)):
        return tuple((make_hashable(e) for e in o))

    if isinstance(o, dict):
        return tuple(sorted((k,make_hashable(v)) for k,v in o.items()))

    if isinstance(o, (set, frozenset)):
        return tuple(sorted(make_hashable(e) for e in o))

    return o

o = dict(x=1,b=2,c=[3,4,5],d={6,7})
print(make_hashable(o))
# (('b', 2), ('c', (3, 4, 5)), ('d', (6, 7)), ('x', 1))

print(make_hash_sha256(o))
# fyt/gK6D24H9Ugexw+g3lbqnKZ0JAcgtNW+rXIDeU2Y=
Claudio Fahey
    `make_hash_sha256(((0,1),(2,3)))==make_hash_sha256({0:1,2:3})==make_hash_sha256({2:3,0:1})!=make_hash_sha256(((2,3),(0,1)))`. This isn't quite the solution I'm looking for, but it is a nice intermediate. I'm thinking of adding `type(o).__name__` to the beginning of each of the tuples to force differentiation. – Poik Sep 17 '17 at 20:01
  • If you want to sort the list as well : `tuple(sorted((make_hashable(e) for e in o)))` – Suraj Aug 13 '19 at 15:12
  • make_hash_sha256() - nice! – jtlz2 Oct 31 '19 at 19:04
    @Suraj You shouldn't want to sort the list before hashing because lists which have their contents in different orders are definitely not the same thing. If the order of the items does not matter the problem is that you are using the wrong data structure. You should be using a set instead of a list. – scottclowe Nov 03 '19 at 02:11
  • @scottclowe That is very true. Thanks for adding that point. There are 2 scenarios where you'd still want a list (without specific ordering needs) - 1. List of repeating items. 2. When you've to use a JSON directly. As JSON doesn't support "set" representation. – Suraj Nov 04 '19 at 09:11
5

Updated from 2013 reply...

None of the above answers seem reliable to me. The reason is the use of items(). As far as I know, this comes out in a machine-dependent order.

How about this instead?

import hashlib

def dict_hash(the_dict, *ignore):
    if ignore:  # Sometimes you don't care about some items
        interesting = the_dict.copy()
        for item in ignore:
            if item in interesting:
                interesting.pop(item)
        the_dict = interesting
    result = hashlib.sha1(
        ('%s' % sorted(the_dict.items())).encode('utf-8')  # encode: required on Python 3
    ).hexdigest()
    return result
Steve Yeago
  • Why do you think it matters that `dict.items` does not return a predictably ordered list? `frozenset` takes care of that – glarrain Jul 11 '14 at 22:54
    A set, by definition, is unordered. Thus the order in which objects are added is irrelevant. You do have to realize that the built-in function `hash` does not care about how the frozenset contents are printed or something like that. Test it in several machines and python versions and you'll see. – glarrain Jul 13 '14 at 03:38
  • Why do you use the extra hash() call in value = hash('%s::%s' % (value, type(value)))?? – RuiDo Jul 06 '16 at 10:12
4

While hash(frozenset(x.items())) and hash(tuple(sorted(x.items()))) work, that's doing a lot of work allocating and copying all the key-value pairs. A hash function really should avoid a lot of memory allocation.

A little bit of math can help here. The problem with most hash functions is that they assume that order matters. To hash an unordered structure, you need a commutative operation. Multiplication doesn't work well as any element hashing to 0 means the whole product is 0. Bitwise & and | tend towards all 0's or 1's. There are two good candidates: addition and xor.

from functools import reduce
from operator import xor

class hashable(dict):
    def __hash__(self):
        return reduce(xor, map(hash, self.items()), 0)

    # Alternative
    def __hash__(self):
        return sum(map(hash, self.items()))

One point: xor works, in part, because dict guarantees keys are unique. And sum works because Python will bitwise truncate the results.

If you want to hash a multiset, sum is preferable. With xor, {a} would hash to the same value as {a, a, a} because x ^ x ^ x = x.

If you really need the guarantees that SHA makes, this won't work for you. But to use a dictionary in a set, this will work fine; Python containers are resilient to some collisions, and the underlying hash functions are pretty good.
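For example, used as a set member or dict key (stable only within one interpreter run, given hash randomization):

```python
from functools import reduce
from operator import xor

class hashable(dict):
    def __hash__(self):
        # XOR over the item hashes: commutative, so insertion order
        # cannot affect the result.
        return reduce(xor, map(hash, self.items()), 0)

d1 = hashable(a=1, b=2)
d2 = hashable(b=2, a=1)
assert hash(d1) == hash(d2)
assert d1 in {d2}  # usable as a set member
```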

Ben
3

To preserve key order, instead of hash(str(dictionary)) or hash(json.dumps(dictionary)) I would prefer this quick-and-dirty solution:

from pprint import pformat
h = hash(pformat(dictionary))

It will work even for types like datetime.datetime that are not JSON serializable.
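The reason this works is that pformat sorts dictionary keys before rendering, so two dicts that differ only in insertion order produce identical strings (though, being a display format, the exact output isn't guaranteed to stay the same across Python versions):

```python
from pprint import pformat

d1 = {'a': 1, 'b': 2}
d2 = {'b': 2, 'a': 1}
assert pformat(d1) == pformat(d2)  # keys are sorted before display
```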

shirk3y
    Who guarantees that pformat or json always use the same order? – ThiefMaster Jan 30 '15 at 06:02
    @ThiefMaster, "Changed in version 2.5: Dictionaries are sorted by key before the display is computed; before 2.5, a dictionary was sorted only if its display required more than one line, although that wasn’t documented."(https://docs.python.org/2/library/pprint.html) – Arel Jan 15 '16 at 16:21
    This doesn't seem valid to me. The pprint modules and pformat are understood by the authors to be for display purposes and not serialization. Because of this, you shouldn't feel safe in assuming that pformat will always return a result that happens to work. – David Sanders Apr 05 '17 at 22:27
3

You could use the third-party frozendict module to freeze your dict and make it hashable.

from frozendict import frozendict
my_dict = frozendict(my_dict)

For handling nested objects, you could go with:

import collections.abc

def make_hashable(x):
    if isinstance(x, collections.abc.Hashable):
        return x
    elif isinstance(x, collections.abc.Sequence):
        return tuple(make_hashable(xi) for xi in x)
    elif isinstance(x, collections.abc.Set):
        return frozenset(make_hashable(xi) for xi in x)
    elif isinstance(x, collections.abc.Mapping):
        return frozendict({k: make_hashable(v) for k, v in x.items()})
    else:
        raise TypeError("Don't know how to make {} objects hashable".format(type(x).__name__))

If you want to support more types, use functools.singledispatch (Python 3.7):

@functools.singledispatch
def make_hashable(x):
    raise TypeError("Don't know how to make {} objects hashable".format(type(x).__name__))

@make_hashable.register
def _(x: collections.abc.Hashable):
    return x

@make_hashable.register
def _(x: collections.abc.Sequence):
    return tuple(make_hashable(xi) for xi in x)

@make_hashable.register
def _(x: collections.abc.Set):
    return frozenset(make_hashable(xi) for xi in x)

@make_hashable.register
def _(x: collections.abc.Mapping):
    return frozendict({k: make_hashable(v) for k, v in x.items()})

# add your own types here
Eric
  • This doesn't work, for example, for a `dict` of `DataFrame` objects. – James Hirschorn Mar 12 '19 at 17:48
  • @JamesHirschorn: Updated to fail loudly – Eric Mar 13 '19 at 00:40
  • Better! I added the following `elif` clause to make it work with `DataFrame`s: `elif isinstance(x, pd.DataFrame): return make_hashable(hash_pandas_object(x).tolist())` I will edit the answer and see if you accept it... – James Hirschorn Mar 15 '19 at 17:54
  • Note that `DataFrame` is an instance of `Hashable` since it has a `__hash__` function, even though the function returns an error message. – James Hirschorn Mar 18 '19 at 16:29
  • I rolled back your change, and provided an alternative that's more extensible – Eric Mar 18 '19 at 17:05
  • Unfortunately, I have found that this approach fails. For example, `` returns a different hash value every time the python script runs. – James Hirschorn Mar 19 '19 at 23:52
    OK. I see I was asking for something more than "hashable", which only guarantees that objects that are equal shall have the same hash. I am working on a version that will give the the same value between runs, and independent of python version, etc.. – James Hirschorn Mar 20 '19 at 00:49
    `hash` randomization is deliberate security feature enabled by default in python 3.7. – Eric Mar 20 '19 at 01:51
3

You can use the maps library to do this. Specifically, maps.FrozenMap

import maps
fm = maps.FrozenMap(my_dict)
hash(fm)

To install maps, just do:

pip install maps

It handles the nested dict case too:

import maps
fm = maps.FrozenMap.recurse(my_dict)
hash(fm)

Disclaimer: I am the author of the maps library.

Pedro Cattori
  • The library doesn't sort the list inside a dict. And hence this could produce different hash. There should be an option to sort a list too. A frozenset should help, but I wonder how would you handle the case with a nested dict containing a list of dicts. As dict are unhashable. – Suraj Aug 13 '19 at 15:04
    @Suraj : it *does* handle nested structure via `.recurse`. See https://maps.readthedocs.io/en/latest/api.html#maps.FrozenMap.recurse . Ordering in lists is semantically meaningful, if you want order independence you can convert your lists to sets prior to calling `.recurse`. You can also make use of the `list_fn` parameter to `.recurse` to use a different hashable data structure than `tuple` (.e.g `frozenset`) – Pedro Cattori Nov 17 '19 at 14:26
1

Answers from this thread with the highest upvotes didn't work for me, as their hashing functions give different results on different machines due to PYTHONHASHSEED.

I adjusted all the hints from this thread and came up with a solution that works for me.

import collections
import hashlib
import json


def simplify_object(o):
    if isinstance(o, dict):
        ordered_dict = collections.OrderedDict(sorted(o.items()))
        for k, v in ordered_dict.items():
            v = simplify_object(v)
            ordered_dict[str(k)] = v
        o = ordered_dict
    elif isinstance(o, (list, tuple, set)):
        o = [simplify_object(el) for el in o]
    else:
        o = str(o).strip()
    return o


def make_hash(o):
    o = simplify_object(o)
    bytes_val = json.dumps(o, sort_keys=True, ensure_ascii=True, default=str)
    hash_val = hashlib.sha1(bytes_val.encode()).hexdigest()
    return hash_val
Tom Wojcik
0

One way to approach the problem is to make a tuple of the dictionary's items:

hash(tuple(my_dict.items()))
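Note that this is order-sensitive: since Python 3.7, dicts preserve insertion order, so two equal dicts built in different orders yield different tuples. Sorting the items first avoids that:

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 2, 'a': 1}

assert d1 == d2  # equal as dicts...
assert tuple(d1.items()) != tuple(d2.items())  # ...but the tuples differ
assert hash(tuple(sorted(d1.items()))) == hash(tuple(sorted(d2.items())))
```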
Anonymous
0

MD5 HASH

The method which resulted in the most stable results for me was using md5 hashes and json.dumps:

from typing import Dict, Any
import hashlib
import json

def dict_hash(dictionary: Dict[str, Any]) -> str:
    """MD5 hash of a dictionary."""
    dhash = hashlib.md5()
    # We need to sort arguments so {'a': 1, 'b': 2} is
    # the same as {'b': 2, 'a': 1}
    encoded = json.dumps(dictionary, sort_keys=True).encode()
    dhash.update(encoded)
    return dhash.hexdigest()
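A quick check (restating the function in condensed form, without the type hints, so this snippet runs standalone):

```python
import hashlib
import json

def dict_hash(dictionary):
    """MD5 hash of a dictionary (condensed restatement of the above)."""
    encoded = json.dumps(dictionary, sort_keys=True).encode()
    return hashlib.md5(encoded).hexdigest()

assert dict_hash({'a': 1, 'b': 2}) == dict_hash({'b': 2, 'a': 1})
assert len(dict_hash({})) == 32  # hex-encoded MD5 is 32 characters
```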
Shay Dahan
-6

I do it like this:

hash(str(my_dict))
garbanzio