
I'm trying to do a comparison of strings in Python. My strings contain titles which can be structured a number of different ways:

'Title'
'Title: Subtitle'
'Title - Subtitle'
'Title, Subtitle'
'Title Subtitle'

Is it possible to do a similarity comparison in Python so that it can determine that match('Title: Subtitle', 'Title - Subtitle') == True (or however it would be constructed)?

Basically I'm trying to determine if they're the same title even if the splitting is different.

if 'Title: Subtitle' == 'Title - Subtitle':
    match = True
else:
    match = False

There are also some that might be stored as 'The Title: The Subtitle' or 'Title, The: Subtitle, The', although I think that adds a bit of complexity I could probably get around by reconstructing the string.

Midavalo

8 Answers


What you're trying to do has already been implemented very well in the jellyfish package.

>>> import jellyfish
>>> jellyfish.levenshtein_distance('jellyfish', 'smellyfish')
2
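
Applied to the titles in the question, one option is to accept a pair when the edit distance is small relative to the string lengths. A minimal sketch (the titles_match name and the 0.2 tolerance are my own illustrative choices, not part of the package):

import jellyfish

def titles_match(a, b, tolerance=0.2):
    # Accept when the edit distance is at most 20% of the longer string's length
    distance = jellyfish.levenshtein_distance(a, b)
    return distance <= tolerance * max(len(a), len(b))

>>> titles_match('Title: Subtitle', 'Title - Subtitle')
True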
DevShark

The standard library's difflib module provides a function get_close_matches which does fuzzy string matching.

>>> import difflib
>>> difflib.get_close_matches('python', ['snakes', 'thon.py', 'pythin'])
['pythin', 'thon.py']  # ordered by similarity score
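
For comparing just two titles rather than searching a list, difflib's SequenceMatcher gives a similarity ratio directly. A rough sketch (the match name and the 0.8 cutoff are illustrative choices, not fixed rules):

import difflib

def match(a, b, cutoff=0.8):
    # ratio() is 1.0 for identical strings and 0.0 for completely different ones
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

>>> match('Title: Subtitle', 'Title - Subtitle')
True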
Todd Owen
  • difflib is using Ratcliff/Obershelp https://xlinux.nist.gov/dads/HTML/ratcliffObershelp.html which may not be as good as Levenshtein in some cases – wordsforthewise Nov 27 '17 at 23:28

You can use the in keyword. It isn't a similarity comparison, but it does what you want:

s = "Title: Subtitle"

if "Title" in s or "Subtitle" in s:
    match = 'True'
else:
    match = 'False'
xdola

Try replacing the separator characters and then checking for equality:

def match(str1, str2):
    str1 = str1.replace(' -', '').replace(',', '').replace(':', '')
    str2 = str2.replace(' -', '').replace(',', '').replace(':', '')
    return str1 == str2

>>> match('Title: Subtitle', 'Title - Subtitle')
True
>>> match('Title: Subtitle', 'Title, Subtitle')
True
A.J. Uppal

If the only obstacle is punctuation, the problem is trivial: Just discard non-word characters and compare the remaining lists of words.

import re

s1 = 'Title - Subtitle'
toks1 = re.split(r"\W+", s1)              # split on runs of non-word characters, keeping just the words
toks1 = [w.lower() for w in toks1 if w]   # lowercase; 'if w' drops empty strings from leading/trailing punctuation

I threw in lowercasing since that could differ too. Apply the same to each input and compare the lists.
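
Wrapped in a helper, the whole comparison is just a few lines (the same_words name is mine):

import re

def same_words(a, b):
    def words(s):
        return [w.lower() for w in re.split(r"\W+", s) if w]
    return words(a) == words(b)

>>> same_words('Title: Subtitle', 'Title - Subtitle')
True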

But as you point out, there can be other differences. If your data really consists of titles (books, movies, scientific articles), you can start by removing articles and common connectives (so-called "stopwords"), like libraries do. E.g., "The title of the article" gets stripped down to ["title", "article"]. To deal with other possible differences in word order, you could use the so-called "bag of words" approach, common in information retrieval. Convert the list of tokens to a set, or to a dictionary of word counts for cases where some words occur multiple times. Here's an example, using word counts and NLTK's English stopword list as a filter.

import nltk
from collections import Counter

# The stopword corpus needs a one-time download: nltk.download("stopwords")
stopwords = set(nltk.corpus.stopwords.words("english"))

toks1 = [ t for t in toks1 if t not in stopwords ]
cnt1 = Counter(toks1)
cnt2 = Counter(toks2)  # Another title string, processed the same way
if cnt1 == cnt2:
    print("The two strings have exactly the same content words")

If there's still more variation, the sky is the limit. Approximate text matching is a topic of active research with applications in information retrieval, plagiarism detection, genetics, etc. You could check if one title is a subset of the other (maybe someone left out the subtitle). You could try matching by "edit distance" (e.g. the "Levenshtein distance" mentioned by a couple of other answers), applying it either to letters or to whole words. You could try information retrieval algorithms like TF-IDF score. These are just a few of the things you could try, so look for the simplest solution that will do the job for you. Google is your friend.
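
For instance, the subset check is only a couple of lines on top of the tokenization shown above (sketch only; note it makes a bare 'Title' match 'Title: Subtitle'):

import re

def tokens(s):
    return {w.lower() for w in re.split(r"\W+", s) if w}

>>> tokens('Title') <= tokens('Title: Subtitle')   # subset test on word sets
True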

alexis
  • Very informative answer, thank you. Obviously this is a lot more complex than I first imagined. – Midavalo Mar 27 '16 at 23:59
  • It is complex, but if you think Levenshtein distance is a good fit for your task you can just leave it at that... – alexis Mar 28 '16 at 18:51

I'm a Ruby programmer with no Python experience, but in Ruby such a problem would quickly be solved by using the Levenshtein gem. It calculates the number of edits needed to turn one string into another.

I see there's a Python equivalent also, so take a look at https://pypi.python.org/pypi/python-Levenshtein
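
If you install it (pip install python-Levenshtein), usage looks roughly like this; Levenshtein.ratio is a normalised similarity between 0 and 1:

import Levenshtein

print(Levenshtein.distance('Title: Subtitle', 'Title - Subtitle'))   # 2 edits apart
print(Levenshtein.ratio('Title: Subtitle', 'Title - Subtitle'))     # ~0.9 for this pair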

peter

This should work. Python's str.translate can be used to strip out the differing characters.

titles = ['Title: Sub', 'Title Sub', 'Title - Sub']
table = str.maketrans('', '', ': -')   # maps ':', ' ' and '-' to None, i.e. deletes them (Python 3; Python 2 used translate(None, ': -'))

if titles[1].translate(table) == titles[2].translate(table):
    match = True
else:
    match = False

fnmatch.fnmatch can also be handy here, even though it is designed for Unix filename matching; consider the following examples:

>>> from fnmatch import fnmatch
>>> l = ['Title: Subtitle', 'Title - Subtitle', 'Title, Subtitle', 'Title Subtitle']
>>>
>>> all(fnmatch(x, 'Title*Subtitle') for x in l)
True

Another method would be checking whether they all match an re pattern:

>>> import re
>>> l
['Title: Subtitle', 'Title - Subtitle', 'Title, Subtitle', 'Title Subtitle']
>>> all(re.search(r'^Title.*?Subtitle$', x) for x in l)
True
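
To apply this to arbitrary pairs instead of a hand-written pattern, one could build the regex from one of the titles; this function is my own extension, not part of the answer above:

import re

def match(a, b):
    # Join the words of the first title with \W+ so any punctuation/spacing in the second is accepted
    pattern = r'\W+'.join(map(re.escape, re.split(r'\W+', a)))
    return re.fullmatch(pattern, b) is not None

>>> match('Title: Subtitle', 'Title - Subtitle')
True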
Iron Fist
  • Google "regular expression" and you'll find much better ways to implement this approach. – alexis Mar 27 '16 at 22:25
  • @alexis ...Of course I know about the `re` solution, but I'm just trying to leave it as a last resort so as not to complicate things... anyway, I was working on an alternative way using `re`, which I've already posted – Iron Fist Mar 27 '16 at 22:32