65

Assume I have a list of words, and I want to find the number of times each word appears in that list.

An obvious way to do this is:

words = "apple banana apple strawberry banana lemon"
uniques = set(words.split())
freqs = [(item, words.split().count(item)) for item in uniques]
print(freqs)

But I don't find this code very good, because the program runs through the word list twice: once to build the set, and a second time to count the number of appearances.

Of course, I could write a function to run through the list and do the counting, but that wouldn't be so Pythonic. So, is there a more efficient and Pythonic way?

glhr
Daniyar

11 Answers

144

The Counter class in the collections module is purpose built to solve this type of problem:

from collections import Counter
words = "apple banana apple strawberry banana lemon"
Counter(words.split())
# Counter({'apple': 2, 'banana': 2, 'strawberry': 1, 'lemon': 1})
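If you also want the words ordered by frequency, `Counter` provides `most_common()`. A small sketch (ties keep first-encountered order on Python 3.7+):

```python
from collections import Counter

words = "apple banana apple strawberry banana lemon"
counts = Counter(words.split())

# most_common(n) returns the n (word, count) pairs with the highest counts
print(counts.most_common(2))  # [('apple', 2), ('banana', 2)]
```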
Boris
sykora
  • According to http://stackoverflow.com/a/20308657/2534876, this is fastest on Python3 but slow on Python2. – JDong Dec 31 '14 at 05:34
  • do you know if there is a flag to convert this to a percentage freq_dict? E.g., `'apple' : .3333 (2/6),` – Tommy Sep 23 '15 at 13:30
  • @Tommy `total = sum(your_counter_object.values())` then `freq_percentage = {k: v/total for k, v in your_counter_object.items()}` – Boris Apr 25 '19 at 03:00
95

defaultdict to the rescue!

from collections import defaultdict

words = "apple banana apple strawberry banana lemon"

d = defaultdict(int)
for word in words.split():
    d[word] += 1

This runs in O(n).
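One caveat worth noting with this approach, sketched below on a made-up key: a `defaultdict` inserts a key on *any* lookup of a missing key, not just on assignment.

```python
from collections import defaultdict

d = defaultdict(int)
for word in "apple banana apple".split():
    d[word] += 1

# Merely reading a missing key inserts it with the default value int() == 0
_ = d["missing"]
print("missing" in d)  # True -- the lookup created the key
```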

Triptych
11

Standard approach:

from collections import defaultdict

words = "apple banana apple strawberry banana lemon"
words = words.split()
result = defaultdict(int)
for word in words:
    result[word] += 1

print(result)

Groupby oneliner:

from itertools import groupby

words = "apple banana apple strawberry banana lemon"
words = words.split()

result = {key: len(list(group)) for key, group in groupby(sorted(words))}
print(result)
nosklo
11
freqs = {}
for word in words:
    freqs[word] = freqs.get(word, 0) + 1 # fetch and increment OR initialize

I think this gives the same result as Triptych's solution, but without importing collections. Also a bit like Selinap's solution, but more readable imho. Almost identical to Thomas Weigel's solution, but without using exceptions.

This could be slower than using defaultdict() from the collections module, however, since the value is fetched, incremented, and then assigned again rather than just incremented in place. That said, += may do much the same internally.
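A rough way to check that claim yourself is a `timeit` comparison like the sketch below (absolute numbers will vary by machine and Python version):

```python
import timeit
from collections import defaultdict

# A larger word list so the timings are measurable
words = ("apple banana apple strawberry banana lemon " * 1000).split()

def with_get():
    freqs = {}
    for word in words:
        freqs[word] = freqs.get(word, 0) + 1
    return freqs

def with_defaultdict():
    freqs = defaultdict(int)
    for word in words:
        freqs[word] += 1
    return freqs

# Both produce identical counts; only the timings differ
assert with_get() == with_defaultdict()
print("dict.get:    ", timeit.timeit(with_get, number=100))
print("defaultdict: ", timeit.timeit(with_defaultdict, number=100))
```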

hopla
7

If you don't want to use the standard dictionary method (looping through the list and incrementing the matching dict key), you can try this:

>>> from itertools import groupby
>>> words = "apple banana apple strawberry banana lemon"
>>> myList = words.split() # ['apple', 'banana', 'apple', 'strawberry', 'banana', 'lemon']
>>> [(k, len(list(g))) for k, g in groupby(sorted(myList))]
[('apple', 2), ('banana', 2), ('lemon', 1), ('strawberry', 1)]

It runs in O(n log n) time.
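One pitfall worth spelling out: `groupby` only groups *consecutive* equal elements, so dropping the `sorted()` call silently produces wrong counts. A small sketch on a shortened word list:

```python
from itertools import groupby

words = "apple banana apple".split()

# Without sorting, the two separate 'apple' runs are counted separately
unsorted_counts = [(k, len(list(g))) for k, g in groupby(words)]
print(unsorted_counts)  # [('apple', 1), ('banana', 1), ('apple', 1)]

# Sorting first makes equal words adjacent, so the counts come out right
sorted_counts = [(k, len(list(g))) for k, g in groupby(sorted(words))]
print(sorted_counts)  # [('apple', 2), ('banana', 1)]
```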

Nick Presta
3

Without defaultdict:

words = "apple banana apple strawberry banana lemon"
my_count = {}
for word in words.split():
    try: my_count[word] += 1
    except KeyError: my_count[word] = 1
tzot
Thomas Weigel
  • Seems slower than defaultdict in my tests – nosklo May 21 '09 at 16:59
  • splitting by a space is redundant. Also, you should use the dict.setdefault method instead of the try/except. – Triptych May 21 '09 at 17:05
  • It's a lot slower because you are using exceptions. Exceptions are very costly in almost any language. Avoid using them for logic branches. Look at my solution for an almost identical method, but without using exceptions: http://stackoverflow.com/questions/893417/item-frequency-count-in-python/983434#983434 – hopla Jun 11 '09 at 20:30
1
words = "apple banana apple strawberry banana lemon"
w = words.split()
uniques = list(set(w))
word_freqs = {}
for i in uniques:
    word_freqs[i] = w.count(i)
print(word_freqs)

Hope this helps!

user2922935
0

I happened to be working on a Spark exercise; here is my solution.

tokens = ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog']

print({n: tokens.count(n) / len(tokens) for n in tokens})

Output:

{'brown': 0.16666666666666666, 'lazy': 0.16666666666666666, 'jumps': 0.16666666666666666, 'fox': 0.16666666666666666, 'dog': 0.16666666666666666, 'quick': 0.16666666666666666}
Jaffer Wilson
0

Use reduce() to convert the list to a single dict.

from functools import reduce  # needed on Python 3

words = "apple banana apple strawberry banana lemon"
reduce(lambda d, c: d.update([(c, d.get(c, 0) + 1)]) or d, words.split(), {})

returns

{'strawberry': 1, 'lemon': 1, 'apple': 2, 'banana': 2}
Gadi
0

Can't you just use count?

words = 'the quick brown fox jumps over the lazy gray dog'
words.count('z')
#output: 1
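One caveat with this: `str.count` counts substring occurrences, not whole words, so for word frequencies it is safer to count on the split list. A small sketch on a made-up string:

```python
text = "cat catalog cat"

# str.count matches substrings, so the 'cat' inside 'catalog' counts too
substring_count = text.count("cat")     # 3
# Counting on the split list matches whole words only
word_count = text.split().count("cat")  # 2
print(substring_count, word_count)
```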
Antonio
-1

The code below takes some extra cycles, but it is another method:

def func(tup):
    # sort key: the count, i.e. the last element of each (word, count) pair
    return tup[-1]


def print_words(filename):
    with open(filename, 'r') as f:
        whole_content = f.read().lower()
    print(whole_content)
    list_content = whole_content.split()
    counts = {}
    for one_word in list_content:
        counts[one_word] = 0
    for one_word in list_content:
        counts[one_word] += 1
    print(counts.items())
    print(sorted(counts.items(), key=func))
Jesse
Prabhu S