
I was wondering if anyone has logic in Java that removes duplicate lines from a file while maintaining the lines' original order.

I would prefer a solution that does not use regex.

– MatBanik

7 Answers

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {
    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        if (lines.add(uniqueLine = super.readLine()))
            return uniqueLine;
        return "";
    }

  //for testing.. 

    public static void main(String args[]) {
        try {
            // Open the file that is the first
            // command line parameter
            FileInputStream fstream = new FileInputStream(
                    "test.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read File Line By Line
            while ((strLine = br.readLine()) != null) {
                // Print the content on the console
                if (!strLine.isEmpty())
                    System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) {// Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }

}

Modified Version:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {
    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        // Read until we encounter a line we have not seen before, or EOF (null).
        do {
            uniqueLine = super.readLine();
        } while (uniqueLine != null && !lines.add(uniqueLine));
        return uniqueLine;
    }

    public static void main(String args[]) {
        try {
            // Open the file that is the first
            // command line parameter
            FileInputStream fstream = new FileInputStream(
                    "/home/emil/Desktop/ff.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read File Line By Line
            while ((strLine = br.readLine()) != null) {
                // Print the content on the console
                    System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) {// Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }

    }
}
– Peter Lawrey
– Emil

Removing duplicate lines from text or a file is easy with the Java 8 Stream API. Streams support aggregate operations such as distinct and sorted and work with Java's existing data structures and their methods. The following example uses the Stream API to remove duplicates from, and optionally sort, the contents of a file.

package removeword;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import static java.nio.file.StandardOpenOption.*;
import static java.util.stream.Collectors.joining;

public class Java8UniqueWords {

    public static void main(String[] args) throws IOException {
        Path sourcePath = Paths.get("C:/Users/source.txt");
        Path changedPath = Paths.get("C:/Users/removedDuplicate_file.txt");
        try (final Stream<String> lines = Files.lines(sourcePath)
                // .map(line -> line.toLowerCase()) /* optional: normalize with existing String methods */
                .distinct()
                // .sorted() /* aggregate operation to sort the distinct lines */
        ) {
            final String uniqueWords = lines.collect(joining("\n"));
            System.out.println("Final Output:" + uniqueWords);
            Files.write(changedPath, uniqueWords.getBytes(), WRITE, CREATE, TRUNCATE_EXISTING);
        }
    }
}
– Ramgau

If you feed the lines into a LinkedHashSet, it ignores the repeated ones, since it's a set, but preserves the order, since it's linked. If you just want to know whether you've seen a given line before, feed them into a plain Set as you go, and ignore those the Set already contains.

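For illustration, a minimal sketch of that approach (the class name and file name here are just placeholders, not from the answer):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

public class OrderPreservingDedup {
    public static void main(String[] args) throws IOException {
        // LinkedHashSet drops repeats but keeps first-seen order.
        Set<String> unique = new LinkedHashSet<>(
                Files.readAllLines(Paths.get("test.txt")));
        unique.forEach(System.out::println);
    }
}
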
– entonio

For better performance, you can use Java 8 features, namely streams and method references, with a LinkedHashSet as the collection, as below:

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.stream.Collectors;

public class UniqueOperation {

    private static PrintWriter pw;

    public static void main(String[] args) throws IOException {

        pw = new PrintWriter("abc.txt");

        // Collecting into a LinkedHashSet drops duplicates but keeps order.
        for (String p : Files.newBufferedReader(Paths.get("C:/Users/as00465129/Desktop/FrontEndUdemyLinks.txt"))
                             .lines()
                             .collect(Collectors.toCollection(LinkedHashSet::new)))
            pw.println(p);
        pw.flush();
        pw.close();

        System.out.println("File operation performed successfully");
    }
}
– Abhinav

Read the text file using a BufferedReader and store it in a LinkedHashSet. Print it back out.

Here's an example:

import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DuplicateRemover {

    public String stripDuplicates(String aHunk) {
        StringBuilder result = new StringBuilder();
        Set<String> uniqueLines = new LinkedHashSet<String>();

        String[] chunks = aHunk.split("\n");
        uniqueLines.addAll(Arrays.asList(chunks));

        for (String chunk : uniqueLines) {
            result.append(chunk).append("\n");
        }

        return result.toString();
    }

}

Here are some unit tests to verify it (ignore my evil copy-paste ;) ):

import org.junit.Test;
import static org.junit.Assert.*;

public class DuplicateRemoverTest {

    @Test
    public void removesDuplicateLines() {
        String input = "a\nb\nc\nb\nd\n";
        String expected = "a\nb\nc\nd\n";

        DuplicateRemover remover = new DuplicateRemover();

        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }

    @Test
    public void removesDuplicateLinesUnalphabetized() {
        String input = "z\nb\nc\nb\nz\n";
        String expected = "z\nb\nc\n";

        DuplicateRemover remover = new DuplicateRemover();

        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }

}
– Mike

Here's another solution. Let's just use UNIX!

awk '!seen[$0]++' MyFile.java > MyFile.unique.java

Note that redirecting output back onto the input file would truncate it before it is read, and plain uniq only removes adjacent duplicates; the awk one-liner keeps the first occurrence of each line, preserving order.

Edit: Oh wait, I re-read the topic. Is this a legal solution since I managed to be language agnostic?

– Mike
  • I suppose you can use something like the solutions here: http://stackoverflow.com/questions/1088113/is-there-a-java-library-of-unix-functions . I would attempt to write hooks for a script if you're on a UNIX system, though. – Mike May 09 '11 at 02:13
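For completeness, a hypothetical sketch of driving such a tool from Java with ProcessBuilder (assumes awk is on the PATH; the file name is a placeholder):

import java.io.IOException;

public class ShellDedup {
    public static void main(String[] args) throws IOException, InterruptedException {
        // awk '!seen[$0]++' prints each line only the first time it appears,
        // preserving order (unlike uniq, which only collapses adjacent repeats).
        Process p = new ProcessBuilder("awk", "!seen[$0]++", "input.txt")
                .inheritIO() // write results to this JVM's stdout
                .start();
        p.waitFor();
    }
}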

Here I'm using a HashSet to store the lines already seen:

Scanner scan = new Scanner(System.in); // input; could also wrap a File
Set<String> lines = new HashSet<String>();
StringBuilder strb = new StringBuilder();
while (scan.hasNextLine()) {
    String line = scan.nextLine();
    // add returns false when the line was already seen
    if (lines.add(line)) strb.append(line).append("\n");
}
– ratchet freak