162

Is there a way to delete duplicate lines in a file in Unix?

I can do it with the sort -u and uniq commands, but I want to use sed or awk. Is that possible?

Márton Tamás
Vijay

9 Answers

324
awk '!seen[$0]++' file.txt

seen is an associative array whose keys are the lines of the file ($0 is the current line). If a line hasn't been encountered yet, seen[$0] evaluates to false. The ! is the logical NOT operator and inverts that false to true, and Awk prints the lines for which the expression evaluates to true. The ++ increments seen[$0], so that seen[$0] == 1 after the first time a line is found, then seen[$0] == 2, and so on.
Awk evaluates everything except 0 and "" (the empty string) to true. When a duplicate line comes along, seen[$0] is already non-zero, so !seen[$0] evaluates to false and the line is not written to the output.
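
For example (sample input mine):

$ echo -e '1\n2\n2\n1\n3' | awk '!seen[$0]++'
1
2
3

Note that the non-adjacent duplicate 1 is removed too, unlike with uniq.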

yolenoyer
Jonas Elfström
  • 6
    To save it in a file, we can do this: `awk '!seen[$0]++' merge_all.txt > output.txt` – Akash Kandpal Jul 19 '18 at 11:42
  • 7
    An important caveat here: if you need to do this for multiple files, and you tack more files on the end of the command, or use a wildcard… the 'seen' array will fill up with duplicate lines from ALL the files. If you instead want to treat each file independently, you'll need to do something like `for f in *.txt; do gawk -i inplace '!seen[$0]++' "$f"; done` – Nick K9 Jan 17 '19 at 18:05
  • 1
    @NickK9 that de-duping cumulatively across multiple files is awesome in itself. Nice tip – sfscs Jan 14 '20 at 21:05
  • It also works thanks to the fact that the result of the postfix '++' operator is not the value after the increment, but the previous value. – honzajde Nov 11 '20 at 11:29
34

From http://sed.sourceforge.net/sed1line.txt: (Please don't ask me how this works ;-) )

 # delete duplicate, consecutive lines from a file (emulates "uniq").
 # First line in a set of duplicate lines is kept, rest are deleted.
 sed '$!N; /^\(.*\)\n\1$/!P; D'

 # delete duplicate, nonconsecutive lines from a file. Beware not to
 # overflow the buffer size of the hold space, or else use GNU sed.
 sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'
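
To see the difference, here is each one on a small sample (GNU sed; sample input mine):

$ echo -e '1\n2\n2\n3\n3\n1' | sed '$!N; /^\(.*\)\n\1$/!P; D'
1
2
3
1
$ echo -e '1\n2\n2\n1\n3' | sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'
1
2
3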
Andre Miller
  • geekery ;-) +1, but resource consumption is unavoidable. – Michael Krelin - hacker Sep 18 '09 at 13:16
  • 3
    '$!N; /^\(.*\)\n\1$/!P; D' means "If you're not at the last line, read in another line. Now look at what you have and if it ISN'T stuff followed by a newline and then the same stuff again, print out the stuff. Now delete the stuff (up to the newline)." – Beta Sep 18 '09 at 15:30
  • 2
    'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P' means, roughly, "Append the whole hold space to this line, then if you see a duplicated line throw the whole thing out, otherwise copy the whole mess back into the hold space and print the first part (which is the line you just read)." – Beta Sep 18 '09 at 15:41
  • Is the `$!` part necessary? Doesn't `sed 'N; /^\(.*\)\n\1$/!P; D'` do the same thing? I can't come up with an example where the two are different on my machine (fwiw I did try an empty line at the end with both versions and they were both fine). – eddi Jul 24 '12 at 16:16
  • The second solution doesn't work for me (on GNU sed 4.2.1), on a test file with only lowercase English letters and spaces. However, replacing `[ -~]` with `.` or `[^\n]` or even `[ -z{|}~]` (the exact same set of characters) does the job. If anyone can explain the difference, that would be nice... – amichair Feb 22 '13 at 10:09
  • 1
    Almost 7 years later and no one answered @amichair ... makes me sad. ;) Anyways, `[ -~]` represents a range of ASCII characters from 0x20 (space) to 0x7E (tilde). These are considered [the _printable_ ASCII characters](https://www.ascii-code.com/) (linked page also has 0x7F/delete but that doesn't seem right). That makes the solution broken for anyone not using ASCII or anyone using, say, tab characters.. The more portable `[^\n]` includes a whole lot more characters...all of 'em except one, in fact. – B Layer Dec 14 '19 at 07:15
  • Thanks for caring, @BLayer :-) I think I may have been asking about the second case - `[ -z{|}~]` and `[ -~]` seem to select the same range of ASCII characters, yet one worked and the other did not... – amichair Dec 19 '19 at 20:53
  • @amichair You'll never walk aloooone. :D Alas, I think I mistakenly read "space" as "whitespace" and assumed you had a Tab somewhere in there. Maybe it was a bug in sed. Can you still reproduce? I can't with gnu sed 4.4. Only other thing that comes to mind is `[..]` ranges being non-portable across different locales (i.e. LC_COLLATE, fixed by setting `LC_ALL=C`) but that seems like a stretch esp. since it sounds like you know what you're doing. Sorry for raising false hopes. ;) – B Layer Dec 19 '19 at 23:01
  • @BLayer Nope, on GNU sed 4.4 on Ubuntu 18.04 `[ -~]` works for me but `[ -z{|}~]` does not in the second command (non-consecutive lines, e.g. pipe `echo -e "1\n2\n3\n1\n4\n3\n"` into the command). – amichair Dec 21 '19 at 21:15
18

Perl one-liner similar to @Jonas's awk solution:

perl -ne 'print if ! $x{$_}++' file

This variation removes trailing whitespace before comparing:

perl -lne 's/\s*$//; print if ! $x{$_}++' file

This variation edits the file in-place:

perl -i -ne 'print if ! $x{$_}++' file

This variation edits the file in-place and makes a backup, file.bak:

perl -i.bak -ne 'print if ! $x{$_}++' file
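
A quick check (sample input mine):

$ echo -e '1\n2\n2\n1\n3' | perl -ne 'print if ! $x{$_}++'
1
2
3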
Chris Koknat
7

An alternative way using Vim (Vi compatible):

Delete duplicate, consecutive lines from a file:

vim -esu NONE +'g/\v^(.*)\n\1$/d' +wq

Delete duplicate, nonconsecutive and nonempty lines from a file:

vim -esu NONE +'g/\v^(.+)$\_.{-}^\1$/d' +wq
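
For example, to deduplicate a file in place (file.txt here is just a placeholder name):

vim -esu NONE +'g/\v^(.*)\n\1$/d' +wq file.txt
vim -esu NONE +'g/\v^(.+)$\_.{-}^\1$/d' +wq file.txt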

Bohr
6

The one-liner that Andre Miller posted above works except with recent versions of sed when the input file ends with a blank line and no other characters. On my Mac the CPU just spins.

Infinite loop if last line is blank and has no chars:

sed '$!N; /^\(.*\)\n\1$/!P; D'

Doesn't hang, but you lose the last line:

sed '$d;N; /^\(.*\)\n\1$/!P; D'
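
With GNU sed 4.x, for example, you can see the last line being dropped (sample input mine):

$ echo -e '1\n2\n2\n3' | sed '$d;N; /^\(.*\)\n\1$/!P; D'
1
2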

The explanation is at the very end of the sed FAQ:

The GNU sed maintainer felt that despite the portability problems
this would cause, changing the N command to print (rather than
delete) the pattern space was more consistent with one's intuitions
about how a command to "append the Next line" ought to behave.
Another fact favoring the change was that "{N;command;}" will
delete the last line if the file has an odd number of lines, but
print the last line if the file has an even number of lines.

To convert scripts which used the former behavior of N (deleting
the pattern space upon reaching the EOF) to scripts compatible with
all versions of sed, change a lone "N;" to "$d;N;".

Bradley Kreider
4

The first solution is also from http://sed.sourceforge.net/sed1line.txt

$ echo -e '1\n2\n2\n3\n3\n3\n4\n4\n4\n4\n5' |sed -nr '$!N;/^(.*)\n\1$/!P;D'
1
2
3
4
5

The core idea is:

print each group of duplicate, consecutive lines ONLY once, at its LAST appearance, and use the D command to implement the LOOP.

Explanation:

  1. $!N;: if the current line is NOT the last line, use the N command to read the next line into the pattern space.
  2. /^(.*)\n\1$/!P: if the content of the current pattern space is two identical strings separated by \n, which means the next line is the same as the current line, we can NOT print it according to our core idea; otherwise, the current line is the LAST appearance of its run of duplicate consecutive lines, so we use the P command to print the characters in the current pattern space up to the \n (the \n is also printed).
  3. D: we use the D command to delete the characters in the current pattern space up to the \n (the \n is also deleted); the content of the pattern space then becomes the next line.
  4. the D command forces sed to jump back to its FIRST command, $!N, but NOT to read the next line from the file or the standard input stream.

The second solution (my own) is easier to understand:

$ echo -e '1\n2\n2\n3\n3\n3\n4\n4\n4\n4\n5' |sed -nr 'p;:loop;$!N;s/^(.*)\n\1$/\1/;tloop;D'
1
2
3
4
5

The core idea is:

print each group of duplicate, consecutive lines ONLY once, at its FIRST appearance, and use the : command and t command to implement the LOOP.

Explanation:

  1. read a new line from the input stream or file and print it once.
  2. use the :loop command to set a label named loop.
  3. use N to read the next line into the pattern space.
  4. use s/^(.*)\n\1$/\1/ to delete the current line if the next line is the same as the current line; we use the s command to do the delete action.
  5. if the s command is executed successfully, use the tloop command to force sed to jump back to the label named loop, which repeats the loop for the following lines until there are no more duplicates of the most recently printed line; otherwise, use the D command to delete the line that is the same as the latest-printed line and force sed to jump to the first command, the p command; the content of the current pattern space is then the next new line.
Weike
  • same command on Windows with busybox: `busybox echo -e "1\n2\n2\n3\n3\n3\n4\n4\n4\n4\n5" | busybox sed -nr "$!N;/^(.*)\n\1$/!P;D"` – scavenger Feb 24 '20 at 02:21
2

This can also be achieved with awk piped into uniq.
The line below will display the values with consecutive duplicates collapsed (awk needs a program, so '{print}' is used here to pass the lines through):

awk '{print}' file_name | uniq

You can output these values to a new file:

awk '{print}' file_name | uniq > uniq_file_name

The new file uniq_file_name will contain the values with no consecutive duplicates.
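
A quick check of the consecutive-only caveat (sample input mine):

$ echo -e '1\n2\n2\n1' | uniq
1
2
1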

2

uniq would be fooled by trailing spaces and tabs. In order to emulate how a human makes comparisons, I trim all trailing spaces and tabs before comparing.

I think that the $!N; needs curly braces, or else it continues, and that is the cause of the infinite loop.

I have bash 5.0 and sed 4.7 on Ubuntu 20.10. The second one-liner did not work for me, at the character-set match.

Three variations: the first eliminates adjacent repeated lines, the second eliminates repeated lines wherever they occur, and the third eliminates all but the last instance of each line in the file.

pastebin

# First line in a set of duplicate lines is kept, rest are deleted.
# Emulate human eyes on trailing spaces and tabs by trimming those.
# Use after norepeat() to dedupe blank lines.

dedupe() {
 sed -E '
  $!{
   N;
   s/[ \t]+$//;
   /^(.*)\n\1$/!P;
   D;
  }
 ';
}
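
With the function loaded in the shell, dedupe works as a filter; for example (sample input mine):

$ echo -e '1\n2\n2\n3' | dedupe
1
2
3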

# Delete duplicate, nonconsecutive lines from a file. Ignore blank
# lines. Trailing spaces and tabs are trimmed to humanize comparisons
# squeeze blank lines to one

norepeat() {
 sed -n -E '
  s/[ \t]+$//;
  G;
  /^(\n){2,}/d;
  /^([^\n]+).*\n\1(\n|$)/d;
  h;
  P;
  ';
}

lastrepeat() {
 sed -n -E '
  s/[ \t]+$//;
  /^$/{
   H;
   d;
  };
  G;
  # delete previous repeated line if found
  s/^([^\n]+)(.*)(\n\1(\n.*|$))/\1\2\4/;
  # after searching for previous repeat, move tested last line to end
  s/^([^\n]+)(\n)(.*)/\3\2\1/;
  $!{
   h;
   d;
  };
  # squeeze blank lines to one
  s/(\n){3,}/\n\n/g;
  s/^\n//;
  p;
 ';
}
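
norepeat and lastrepeat are used the same way, as filters; for example (sample input mine):

$ echo -e '1\n2\n2\n1\n3' | norepeat
1
2
3
$ echo -e '1\n2\n2\n1\n3' | lastrepeat
2
1
3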
BobDodds
-4
cat filename | sort | uniq -c | awk -F" " '$1<2 {print $2}'

This keeps only the lines that occur exactly once: every line that has a duplicate is removed entirely, and because only $2 is printed, lines containing spaces are truncated to their first field.
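
For example (sample input mine), the duplicated line disappears entirely:

$ echo -e '1\n2\n2\n3' | sort | uniq -c | awk -F" " '$1<2 {print $2}'
1
3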

Sadhun