8

Are there any tools to do a URL compare in Python?

For example, if I have http://google.com and google.com/ I'd like to know that they are likely to be the same site.

If I were to construct a rule manually, I might uppercase it, then strip off the http:// portion, and drop anything after the last alphanumeric character. But I can see failures of this, as I'm sure you can as well.
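
Something like this sketch, using only the standard library (the exact steps here are guesses on my part, and I'm sure it has holes):

from urllib.parse import urlparse

def rough_canonical(url):
    # Make sure urlparse puts the host into .netloc even without a scheme.
    if '://' not in url:
        url = 'http://' + url
    parts = urlparse(url)
    host = parts.netloc.lower()  # hosts are case-insensitive; paths are not
    if host.startswith('www.'):
        host = host[4:]
    return host + parts.path.rstrip('/')  # drop any trailing slash

rough_canonical('http://google.com') == rough_canonical('google.com/')  # True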

Is there a library that does this? How would you do it?

Colin Davis
  • It won't let you post two links without X reputation, but you can include as many URLs as you want if you put them in backquotes so that the parser doesn't convert them to links. I edited your question to show what I think you meant, but if I got it wrong please do edit it again to correct me. – David Z Jul 19 '10 at 21:44
  • Oh, and another thing: what exactly do you mean by "fuzzy" comparison? It's easy to tell that `http://google.com` and `google.com/` are the same thing because they have the exact same canonical form, but that's not fuzzy comparison. A real fuzzy comparison would identify URLs that are similar, but not identical, even after you convert them to a standard form. – David Z Jul 19 '10 at 21:47
  • Thanks, still very new to SO. I've changed the title. – Colin Davis Jul 19 '10 at 22:17
  • http://intertwingly.net/stories/2004/08/04/urlnorm.py seems like it might be a good starting place. – Colin Davis Jul 19 '10 at 22:18

4 Answers

3

This off the top of my head:

def canonical_url(u):
    # Case-fold, then strip the scheme, a leading "www.", and a trailing slash.
    u = u.lower()
    if u.startswith("http://"):
        u = u[7:]
    if u.startswith("www."):
        u = u[4:]
    if u.endswith("/"):
        u = u[:-1]
    return u

def same_urls(u1, u2):
    # Two URLs count as "the same" if they reduce to the same canonical form.
    return canonical_url(u1) == canonical_url(u2)

Obviously, there's lots of room for more fiddling with this. Regexes might be better than `startswith` and `endswith`, but you get the idea.
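
For instance, a regex variant of the same idea (an illustrative sketch, not part of the original answer; it mirrors the `startswith`/`endswith` behavior above):

import re

# Optionally strip the scheme, a leading "www.", and one trailing slash.
_CANON = re.compile(r'^(?:http://)?(?:www\.)?(.*?)/?$')

def canonical_url_re(u):
    return _CANON.match(u.lower()).group(1)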

Ned Batchelder
  • That's similar to what I'd build if I was going to do it manually. I was hoping there was a lib that already does this. It seems like it should be a solved-problem. – Colin Davis Jul 19 '10 at 22:18
  • @Colin: This is one of those things where doing it yourself is usually easy enough, and more likely to get you what you really want. The thing is that there is no strictly defined 'canonical form' of a URL, so everyone who wants it is thinking something slightly different. – Nicholas Knight Jul 20 '10 at 01:15
  • I agree with Nicholas: this isn't well-defined enough to have gotten a standard definition. You'll be best served by writing it yourself. – Ned Batchelder Jul 20 '10 at 02:43
  • Lower-casing the whole URL strikes me as a bad idea; case does matter in URLs (aside from the host and domain). – Ross M Karchner Apr 23 '11 at 13:43
2

You could look up the names using DNS and see if they resolve to the same IP. Some minor string processing may be required to extract the host name first. Note that this is only a heuristic: unrelated sites can share an IP through virtual hosting, and as the output below shows, news.google.com resolves to the same address as google.com even though it is a different site.

from socket import gethostbyname_ex

urls = ['http://google.com', 'google.com/', 'www.google.com/', 'news.google.com']

data = []
for original_name in urls:
    print('url:', original_name)
    # Reduce the URL to a bare host name before the DNS lookup.
    name = original_name.strip()
    name = name.replace('http://', '')
    name = name.replace('http:', '')
    if name.find('/') > 0:
        name = name[:name.find('/')]
    if name.find('\\') > 0:
        name = name[:name.find('\\')]
    print('dns lookup:', name)
    if name:
        try:
            result = gethostbyname_ex(name)
        except OSError:
            continue  # unable to resolve
        for ip in result[2]:
            print('ip:', ip)
            data.append((ip, original_name))

print(data)

Result:

url: http://google.com
dns lookup: google.com
ip: 66.102.11.104
url: google.com/
dns lookup: google.com
ip: 66.102.11.104
url: www.google.com/
dns lookup: www.google.com
ip: 66.102.11.104
url: news.google.com
dns lookup: news.google.com
ip: 66.102.11.104
[('66.102.11.104', 'http://google.com'), ('66.102.11.104', 'google.com/'), ('66.102.11.104', 'www.google.com/'), ('66.102.11.104', 'news.google.com')]
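
To turn those (ip, url) pairs into an actual comparison, you could group by resolved address (a small sketch building on the data list above, with the same virtual-hosting caveat):

from collections import defaultdict

# URLs that land in the same bucket are candidates for being the same site.
groups = defaultdict(list)
for ip, url in data:
    groups[ip].append(url)

for ip, candidates in groups.items():
    print(ip, candidates)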
Martlark
1

There is quite a bit to creating a canonical URL, apparently. The url-normalize library is the best I have tested.

Depending on the source of your URLs, you may wish to clean them of other standard parameters such as UTM codes. w3lib.url.url_query_cleaner is useful for this.

Combining this with Ned Batchelder's answer could look something like:

Code:

from w3lib.url import url_query_cleaner
from url_normalize import url_normalize

urls = ['google.com',
        'google.com/',
        'http://google.com/',
        'http://google.com',
        'http://google.com?',
        'http://google.com/?',
        'http://google.com//',
        'http://google.com?utm_source=Google']


def canonical_url(u):
    # Normalize first, then strip the common UTM tracking parameters.
    u = url_normalize(u)
    u = url_query_cleaner(u,
                          parameterlist=['utm_source', 'utm_medium', 'utm_campaign',
                                         'utm_term', 'utm_content'],
                          remove=True)
    # Finally drop the scheme, "www.", and a trailing slash, as in Ned's answer.
    if u.startswith("http://"):
        u = u[7:]
    if u.startswith("https://"):
        u = u[8:]
    if u.startswith("www."):
        u = u[4:]
    if u.endswith("/"):
        u = u[:-1]
    return u


list(map(canonical_url, urls))

Result:

['google.com',
 'google.com',
 'google.com',
 'google.com',
 'google.com',
 'google.com',
 'google.com',
 'google.com']
Antony
-1

It's not 'fuzzy'; it just finds the 'distance' between two strings:

http://pypi.python.org/pypi/python-Levenshtein/

I would remove all portions which are semantically meaningful to URL parsing (protocol, slashes, etc.), normalize to lowercase, then compute the Levenshtein distance, and from there decide how much difference is an acceptable threshold.

Just an idea.
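
A minimal sketch of that idea (the stripping rules and the threshold of 3 here are arbitrary assumptions; python-Levenshtein provides the distance function):

import Levenshtein  # pip install python-Levenshtein

def strip_noise(u):
    # Drop the scheme, a leading "www.", and trailing slashes before comparing.
    u = u.lower()
    for prefix in ('http://', 'https://', 'www.'):
        if u.startswith(prefix):
            u = u[len(prefix):]
    return u.rstrip('/')

def probably_same(u1, u2, threshold=3):
    return Levenshtein.distance(strip_noise(u1), strip_noise(u2)) <= threshold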

R. Hill