61

I don't care what the differences are. I just want to know whether the contents are different.

Mr_and_Mrs_D
Corey Trager

8 Answers

79

The low-level way:

from __future__ import with_statement  # needed only on Python 2.5
with open(filename1) as f1:
    with open(filename2) as f2:
        if f1.read() == f2.read():
            ...

The high-level way:

import filecmp
if filecmp.cmp(filename1, filename2, shallow=False):
   ...
tzot
Federico A. Ramponi
  • I corrected your filecmp.cmp call, because without a non-true shallow argument, it doesn't do what the question asks for. – tzot Oct 31 '08 at 23:11
  • You're right. http://www.python.org/doc/2.5.2/lib/module-filecmp.html. Thank you very much. – Federico A. Ramponi Nov 01 '08 at 03:21
  • Btw, one should open the files in binary mode to be sure, since the files can differ in line separators. – newtover Apr 29 '13 at 10:30
  • This can have problems if the files are huge. You can save the computer some effort by comparing file sizes first. If the sizes are different, obviously the files are different; you only need to read the files if the sizes are the same. – Bryan Oakley Apr 29 '13 at 10:44
  • I just found out that `filecmp.cmp()` also compares metadata, such as inode number, ctime, and other stats. This was undesirable in my use-case. If you just want to compare contents without comparing metadata, `f1.read() == f2.read()` is probably a better way. – Ray Sep 04 '18 at 11:06
  • Does this work when the files have the same content but in a different order? – Alex Kolydas Aug 05 '19 at 14:28
  • @tzot `shallow=True` will still compare the files byte-by-byte in most cases, right? It only skips the check if the files are hardlinks to the same inode, which is a perfectly safe thing to skip? – endolith Feb 27 '21 at 16:08
  • @Ray Why is comparing metadata undesirable? – endolith Feb 27 '21 at 16:10
  • I just realized that `filecmp.cmp` follows links, so it will say that `file.txt` and `Link to file.txt` are equal. I guess `cmp` does this too, though? – endolith Feb 27 '21 at 16:52
  • @endolith just do an `import filecmp; help(filecmp.cmp)` in a Python console. `shallow=True` won't compare contents, only metadata. – tzot Mar 02 '21 at 17:24
  • @tzot It compares contents except in the rare case that the metadata matches exactly (which means the two files are hardlinks) – endolith Mar 03 '21 at 05:53
29

If you're going for even basic efficiency, you probably want to check the file size first:

import os

if os.path.getsize(filename1) == os.path.getsize(filename2):
    if open(filename1, 'rb').read() == open(filename2, 'rb').read():
        pass  # files are the same

This saves you from reading two files that aren't even the same size, and thus can't be identical.

(Even further than that, you could call out to a fast MD5sum of each file and compare those, but that's not "in Python", so I'll stop here.)

Rich
  • The md5sum approach will be slower with just 2 files (you still need to read the file to compute the sum). It only pays off when you're looking for duplicates among several files. – Brian Oct 31 '08 at 18:15
  • @Brian: you're assuming that md5sum's file reading is no faster than Python's, and that there's no overhead from reading the entire file into the Python environment as a string! Try this with 2GB files... – Rich Oct 31 '08 at 18:17
  • There's no reason to expect md5sum's file reading would be faster than Python's - IO is pretty independent of language. The large file problem is a reason to iterate in chunks (or use filecmp), not to use md5 where you're needlessly paying an extra CPU penalty. – Brian Nov 01 '08 at 00:13
  • This is especially true when you consider the case when the files are not identical. Comparing by blocks can bail out early, but md5sum must carry on reading the entire file. – Brian Nov 01 '08 at 00:29
12

This is a functional-style file comparison function. It returns False immediately if the files have different sizes; otherwise, it reads in 4 KiB blocks and returns False as soon as the first difference is found:

from __future__ import with_statement  # needed only on Python 2.5
import os
import itertools, functools, operator
try:
    izip = itertools.izip  # Python 2
except AttributeError:
    izip = zip  # Python 3

def filecmp(filename1, filename2):
    "Do the two files have exactly the same contents?"
    with open(filename1, "rb") as fp1, open(filename2, "rb") as fp2:
        if os.fstat(fp1.fileno()).st_size != os.fstat(fp2.fileno()).st_size:
            return False  # different sizes ∴ not equal

        # set up one 4 KiB reader for each file
        fp1_reader = functools.partial(fp1.read, 4096)
        fp2_reader = functools.partial(fp2.read, 4096)

        # pair each 4 KiB chunk from the two readers until they return b'' (EOF)
        cmp_pairs = izip(iter(fp1_reader, b''), iter(fp2_reader, b''))

        # True wherever a pair of chunks differs
        inequalities = itertools.starmap(operator.ne, cmp_pairs)

        # voilà; any() stops at the first True value
        return not any(inequalities)

if __name__ == "__main__":
    import sys
    print(filecmp(sys.argv[1], sys.argv[2]))

Just a different take :)

tzot
6

Since I can't comment on others' answers, I'll write my own.

If you use MD5, you definitely must not just call md5.update(f.read()), since that reads the whole file into memory at once.


import hashlib

def get_file_md5(f, chunk_size=8192):
    h = hashlib.md5()
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()
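
The helper above expects an already-open file object. A possible way to use it for the question's yes/no check (the `files_same` wrapper and binary-mode opens are my addition, not part of the original answer):

```python
import hashlib

def get_file_md5(f, chunk_size=8192):
    # chunked hashing, as in the answer above: constant memory use
    h = hashlib.md5()
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()

def files_same(path1, path2):
    # binary mode so the hash sees raw bytes, independent of line endings
    with open(path1, "rb") as f1, open(path2, "rb") as f2:
        return get_file_md5(f1) == get_file_md5(f2)
```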
user32141
  • I believe that any hashing operation is overkill for this question's purposes; direct piece-by-piece comparison is faster and more straightforward. – tzot Oct 31 '08 at 23:18
  • I was just clearing up the actual hashing part someone suggested. – user32141 Nov 01 '08 at 00:44
  • +1 I like your version better. Also, I don't think using a hash is overkill. There's really no good reason not to if all you want to know is whether or not they're different. – Jeremy Cantrell Nov 01 '08 at 07:27
  • @Jeremy Cantrell: one computes hashes when they are to be cached/stored, or compared to cached/stored ones. Otherwise, just compare strings. Whatever the hardware, str1 != str2 is faster than md5.new(str1).digest() != md5.new(str2).digest(). Hashes also have collisions (unlikely but not impossible). – tzot Nov 01 '08 at 16:00
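As the comments note, hashing pays off when looking for duplicates among many files, not when comparing one pair. That use-case can be sketched like so (`find_duplicates` is an illustrative name, not from the thread):

```python
import hashlib
from collections import defaultdict

def find_duplicates(paths):
    # Hash each file once and group by digest: O(n) file reads
    # instead of O(n^2) pairwise byte comparisons.
    by_hash = defaultdict(list)
    for path in paths:
        h = hashlib.md5()
        with open(path, "rb") as fp:
            for chunk in iter(lambda: fp.read(8192), b""):
                h.update(chunk)
        by_hash[h.hexdigest()].append(path)
    return [group for group in by_hash.values() if len(group) > 1]
```

Candidate groups found this way can still be confirmed with a byte-by-byte comparison if hash collisions are a concern.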
2

f = open(filename1, "r").read()
f2 = open(filename2, "r").read()
print(f == f2)


mmattax
  • "Well, I have this 8 GiB file and that 32 GiB file that I want to compare…" – tzot Oct 31 '08 at 23:19
  • This is not a good way to do this. A big issue is that the files are never closed after opening. Less critically, there is no optimization, for example a file size comparison, before opening and reading the files. – kchawla-pi Mar 18 '19 at 10:14
2

For larger files you could compute an MD5 or SHA hash of the files.

ConcernedOfTunbridgeWells
  • So what about two 32 GiB files differing in the first byte only? Why spend CPU time and wait too long for an answer? – tzot Oct 31 '08 at 23:15
2

I would compare the files using an MD5 hash of their contents.

import hashlib

def checksum(f):
    md5 = hashlib.md5()
    with open(f, 'rb') as fp:  # binary mode: hash the raw bytes
        md5.update(fp.read())
    return md5.hexdigest()

def is_contents_same(f1, f2):
    return checksum(f1) == checksum(f2)

if not is_contents_same('foo.txt', 'bar.txt'):
    print('The contents are not the same!')
Jeremy Cantrell
1
from __future__ import with_statement  # needed only on Python 2.5

filename1 = "G:\\test1.TXT"
filename2 = "G:\\test2.TXT"

with open(filename1) as f1:
    with open(filename2) as f2:
        file1list = f1.read().splitlines()
        file2list = f2.read().splitlines()

        if len(file1list) == len(file2list):
            for index in range(len(file1list)):
                if file1list[index] == file2list[index]:
                    print(file1list[index] + " == " + file2list[index])
                else:
                    print(file1list[index] + " != " + file2list[index] + "  Not equal")
        else:
            print("Difference in the size of the file and number of lines")
lrnzcig