533

The project that I am working on (Node.js) involves a lot of file system operations (copying, reading, writing, etc.).

Which methods are the fastest?

Peter Mortensen
bonbonez
  • It's a good question, though it is interesting that it gets 25 upvotes when other similar-format questions will get 3 or 4 downvotes right away for not meeting the SO "standards" (maybe the javascript tag is crawled by kinder people :) – Ben Sep 11 '13 at 11:47
  • Mostly we're just fresh new and excited about this whole "files" business after years of normalizing browsers. – Erik Reppen Oct 30 '14 at 17:32
  • The only correct answer on the page is [this one](https://stackoverflow.com/a/46253698/128511). None of the other answers actually copy files. Files on MacOS and Windows have other metadata that is lost by just copying bytes. Examples of data not copied by any other answer on this page: [windows](https://docs.microsoft.com/en-us/windows/desktop/fileio/file-streams) and [macos](https://apple.stackexchange.com/questions/228444/how-do-i-create-a-named-fork-and-store-data-in-it). Even on Unix the other answers don't copy the creation date, something that's often important when copying a file. – gman Oct 23 '18 at 03:58

18 Answers

802

Use the standard built-in way fs.copyFile:

const fs = require('fs');

// File destination.txt will be created or overwritten by default.
fs.copyFile('source.txt', 'destination.txt', (err) => {
  if (err) throw err;
  console.log('source.txt was copied to destination.txt');
});

If you have to support old, end-of-life versions of Node.js that do not support fs.copyFile, here is how to do it:

const fs = require('fs');
fs.createReadStream('test.log').pipe(fs.createWriteStream('newLog.log'));
Benjamin Gruenbaum
  • Just remember that in real life, you'd want to check both the `createReadStream` and `createWriteStream` for errors, so you wouldn't get a one-liner (though it would still be just as fast). – ebohlman Jul 04 '12 at 00:37
  • How much faster/slower is this than executing the raw `cp test.log newLog.log` via `require('child_process').exec`? – Lance Pollard Jan 30 '13 at 20:03
  • Well, `copy` is not portable on Windows, contrary to a full Node.js solution. – Jean Jul 03 '13 at 18:51
  • How about closing the files, does this code keep them opened after copy completed? – Oleg Mihailik Jul 03 '13 at 22:12
  • @OlegMihailik Both streams are closed by default. The read stream is closed as such by Node. The write stream can remain open if you pass `{ end: false }` to pipe; otherwise it will be closed by default. See here: http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options – user568109 Jul 09 '13 at 05:44
  • Unfortunately on my system using streams is extremely slow compared to `child_process.execFile('/bin/cp', ['--no-target-directory', source, target])`. – Robert Sep 25 '13 at 22:27
  • I used this method and all I got was a blank file on write. Any ideas why? `fs.createReadStream('./init/xxx.json').pipe(fs.createWriteStream('xxx.json'));` – Timmerz Aug 20 '14 at 15:23
  • @Timmerz I had the same issue. The file would be blank and Node would just hold a handle to that file forever. I used the fs-extra module's copy method instead. – Zain Rizvi Jan 05 '15 at 22:24
  • Your code as written cannot be used together with `process.exit`, because the latter terminates all I/O without waiting for the streams to finish their data exchange. – Trident D'Gao Jan 18 '16 at 08:30
  • This is a common way. But if I want to do things after the copy, I must listen to an "end" event. This is not convenient. – Flying Fisher Mar 28 '16 at 06:15
  • I make a copy of a file just after having created it, and it gives a blank file as a result. I'm not sure that using streams to copy files is a good practice. Best is to use `fs-extra.copySync` (see other answer in this thread); it works a lot better. – jck Jul 11 '16 at 16:09
  • @Timmerz I had the same problem. I used the solution given below by Tester and it worked. – nurp Oct 06 '16 at 20:13
  • Note: "copy a file" implies a complete clone of the file, not just the data within the file. Copying a file would include the creation date of the original. When you're streaming from one file to another, you are copying the data, not the file. – f1lt3r Jul 31 '17 at 07:31
  • Mikhail's answer below uses Node's internal `fs.copyFile` function and is the preferred solution: https://stackoverflow.com/a/46253698 – Melle Jul 20 '18 at 09:47
  • This is not "copying a file". Files have things like creation dates (not copied with the code above). Files on MacOS and Windows also have other data streams (not copied with the code above). See the `fs.copyFile` answer. Looking in the Node source, `fs.copyFile` uses the OS-level copy on MacOS and Windows and so should actually copy the files, whereas the code above merely creates a new file and copies the bytes to the new file. – gman Oct 23 '18 at 03:53
  • Using `copyFile()` is better when you care about permissions. In my case I had to copy an executable to my Linux `/tmp/` directory; the stream solution copied the file without executable permissions, so `copyFile()` is the one to go with. It would be great if you mentioned this in your solution. – Hocine Abdellatif Sep 22 '19 at 18:06
  • Haha @Timmerz, you called the file `xxx.json`; reminds me of a teacher who put the example URL in a demo to xxx.com –  May 28 '20 at 09:56
  • Hi from the project, I edited this answer to reflect the current state of affairs (copyFile existing on every non-EoL version of Node.js) - hope that's OK. – Benjamin Gruenbaum Apr 11 '21 at 10:37
296

Same mechanism, but this adds error handling:

var fs = require('fs');

function copyFile(source, target, cb) {
  var cbCalled = false;

  var rd = fs.createReadStream(source);
  rd.on("error", function(err) {
    done(err);
  });
  var wr = fs.createWriteStream(target);
  wr.on("error", function(err) {
    done(err);
  });
  wr.on("close", function(ex) {
    done();
  });
  rd.pipe(wr);

  function done(err) {
    if (!cbCalled) {
      cb(err);
      cbCalled = true;
    }
  }
}
Mike Schilling
  • It is worth noting that the cbCalled flag is needed because pipe errors trigger an error on both streams: source and destination. – Gaston Sanchez Mar 26 '14 at 21:33
  • How do you handle the error if the source file doesn't exist? The destination file still gets created in that case. – Michel Hua May 18 '14 at 18:04
  • I think an error in the `WriteStream` will only unpipe it. You would have to call `rd.destroy()` yourself. At least that's what happened to me. Sadly there's not much documentation except for the source code. – Robert Aug 06 '14 at 05:46
  • What does the `cb` stand for? What should we pass in as the third argument? – SaiyanGirl Feb 26 '15 at 19:29
  • @SaiyanGirl 'cb' stands for "callback". You should pass in a function. – Brian J. Miller Feb 26 '15 at 22:08
  • @Robert, @pilau you could always listen to the open event `rd.on('open', function() {})`, and create the write stream there. – Marc Jan 27 '17 at 12:20
  • @Marc, that's a very good idea, regarding John Poe's question. I can't really remember well anymore, but I think I was talking about something different. Any error while or after opening the source or target file doesn't close the other stream. You can't really rely only on the `open` event. I think in my case I actually wanted to *move* the file and there was an accidental write error at some point, so `copyFile` dutifully returned the error. My app then tried it again successfully and `unlink`ed the source file. But it still showed up in `readdir` because of the open handle. – Robert Jan 28 '17 at 23:01
  • @Mike Schilling, yes, that is fast. But if I try to copy lots of files one after another within a short time period I get `ERROR: There are some read requests waiting on finished stream`. I guess a stream is unique, and instead of waiting for one to finish before starting it just breaks. Any ideas on how to handle this? Thanks – chitzui Aug 10 '17 at 20:39
  • I understood why. Yes, it can't run twice at once. So basically what I did: I created a recursive function that executes the calls one after another. There goes your performance :D – chitzui Aug 11 '17 at 08:59
  • @Robert `.destroy()` is now the official way to close a readable stream as per the documentation - https://nodejs.org/api/stream.html#stream_readable_destroy_error – ubuntugod Jul 18 '18 at 02:28
145

I was not able to get the createReadStream/createWriteStream method working for some reason, but using the fs-extra npm module it worked right away. I am not sure of the performance difference though.

npm install --save fs-extra

var fs = require('fs-extra');
var path = require('path');

fs.copySync(path.resolve(__dirname, './init/xxx.json'), 'xxx.json');
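
As noted in the comments below, fs-extra also provides an asynchronous copy method. A minimal sketch, reusing the same paths (callback-style; the logging is illustrative):

// Asynchronous variant: does not block the event loop.
fs.copy(path.resolve(__dirname, './init/xxx.json'), 'xxx.json', function (err) {
    if (err) return console.error(err);
    console.log('copy finished');
});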
Peter Mortensen
Timmerz
  • This is the best option now – Zain Rizvi Jan 05 '15 at 22:23
  • Using synchronous code in Node kills your application performance. – mvillar May 21 '15 at 06:44
  • Oh please... The question is about the *fastest* method to copy a file. While *fastest* is always subjective, I don't think a synchronous piece of code has any business here. – sampathsris Sep 03 '15 at 18:25
  • Fastest to implement or fastest to execute? Differing priorities mean this is a valid answer. – Patrick Gunderson Sep 23 '15 at 20:35
  • fs-extra also has asynchronous methods, i.e. `fs.copy(src, dst, callback);`, and these should resolve @mvillar's concern. – Marc Durdin Nov 09 '15 at 10:49
  • @Krumia I was - am - under the impression that if there is only one task at hand it does not make any difference whether I use sync or async code... just checking my knowledge, nothing more, nothing less... – Paul Nov 12 '15 at 08:39
  • @Paul There's plenty of use-cases for synchronous code in the NodeJS ecosystem, so I'm not sure I understand @Krumia's reservation. The one place to absolutely avoid synchronous functions is when responsiveness is paramount (such as when writing a server). Even then, you could utilize synchronous code by forking a new VM with `require('child_process').fork(...)`, since it wouldn't block the main event loop. It's all about context and what you're trying to achieve. – StuffAndThings Nov 12 '15 at 21:05
  • Yes, I think if you need performance, or non-blocking behavior for larger files for example, you would definitely want to look into the async methods. – Timmerz Feb 18 '16 at 04:05
  • Thank you. Best solution. – jck Jul 11 '16 at 16:11
  • I tried using fs.copySync(path.resolve(__dirname,'./init/xxx.json'), 'xxx.json'); I also tried the copy method, and the createReadStream solution discussed earlier in this thread. Still getting a blank file. The file does get copied to the desired folder. If I rename it, it will delete the original file, but I need to make multiple copies of the same file. Any ideas how this can be achieved? – Julie D'Mello Jul 13 '16 at 20:26
142

Since Node.js 8.5.0 we have the new fs.copyFile and fs.copyFileSync methods.

Usage example:

var fs = require('fs');

// File "destination.txt" will be created or overwritten by default.
fs.copyFile('source.txt', 'destination.txt', (err) => {
    if (err) 
        throw err;
    console.log('source.txt was copied to destination.txt');
});
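
The synchronous variant mentioned above, as a minimal sketch with the same file names:

var fs = require('fs');

try {
    // Blocks until the copy completes; destination.txt is created or overwritten.
    fs.copyFileSync('source.txt', 'destination.txt');
    console.log('source.txt was copied to destination.txt');
} catch (err) {
    console.error(err);
}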
Peter Mortensen
Mikhail
  • This is the only correct answer on the page. None of the other answers actually copy files. Files on MacOS and Windows have other metadata that is lost by just copying bytes. Examples of data not copied by any other answer on this page: [windows](https://docs.microsoft.com/en-us/windows/desktop/fileio/file-streams) and [macos](https://apple.stackexchange.com/questions/228444/how-do-i-create-a-named-fork-and-store-data-in-it). Even on Unix the other answers don't copy the creation date, something that's often important when copying a file. – gman Oct 23 '18 at 03:56
  • Well, sadly this fails to copy everything on Mac. Hopefully they'll fix it: https://github.com/nodejs/node/issues/30575 – gman Nov 22 '19 at 02:43
  • BTW keep in mind that `copyFile()` was bugged when overwriting longer files. Courtesy of `uv_fs_copyfile()` until Node v8.7.0 (libuv 1.15.0). See https://github.com/libuv/libuv/pull/1552 – Anton Rudeshko Mar 03 '20 at 11:02
76

Fast to write and convenient to use, with promise and error management:

var fs = require('fs');

function copyFile(source, target) {
  var rd = fs.createReadStream(source);
  var wr = fs.createWriteStream(target);
  return new Promise(function(resolve, reject) {
    rd.on('error', reject);
    wr.on('error', reject);
    wr.on('finish', resolve);
    rd.pipe(wr);
  }).catch(function(error) {
    rd.destroy();
    wr.end();
    throw error;
  });
}

The same with async/await syntax:

async function copyFile(source, target) {
  var rd = fs.createReadStream(source);
  var wr = fs.createWriteStream(target);
  try {
    return await new Promise(function(resolve, reject) {
      rd.on('error', reject);
      wr.on('error', reject);
      wr.on('finish', resolve);
      rd.pipe(wr);
    });
  } catch (error) {
    rd.destroy();
    wr.end();
    throw error;
  }
}
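
A usage sketch (the file names are hypothetical):

copyFile('source.txt', 'destination.txt')
  .then(function () { console.log('copied'); })
  .catch(function (error) { console.error(error); });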
Peter Mortensen
benweet
  • What happens when no more input exists (broken network share), but the write still succeeds? Will both reject (from read) and resolve (from write) be called? What if both read/write fail (bad disk sectors during read, full disk during write)? Then reject will be called twice. A Promise solution based on Mike's answer with a flag (unfortunately) seems to be the only viable solution that properly considers error handling. – Lekensteyn May 24 '15 at 13:15
  • The promise is resolved once the copy succeeds. If it's rejected, its state is settled and calling reject multiple times won't make any difference. – benweet May 24 '15 at 17:08
  • I just tested `new Promise(function(resolve, reject) { resolve(1); resolve(2); reject(3); reject(4); console.log("DONE"); }).then(console.log.bind(console), function(e){console.log("E", e);});` and looked up the [spec](http://people.mozilla.org/~jorendorff/es6-draft.html#sec-promise-objects) on this, and you are right: *Attempting to resolve or reject a resolved promise has no effect.* Perhaps you could extend your answer and explain why you have written the function in this way? Thanks :-) – Lekensteyn May 24 '15 at 22:42
  • By the way, `close` should be `finish` for Writable streams. – Lekensteyn May 25 '15 at 09:32
  • And if you wonder why your application never closes after pipe errors on `/dev/stdin`, that is a bug: https://github.com/joyent/node/issues/25375 – Lekensteyn May 27 '15 at 09:26
  • @Lekensteyn, you are right. 'close' should be 'finish' according to the [official API](https://nodejs.org/api/stream.html#stream_event_finish). – Madwyn Jun 10 '15 at 17:05
  • According to the API: "One important caveat is that if the Readable stream emits an error during processing, the Writable destination is not closed automatically. If an error occurs, it will be necessary to manually close each stream in order to prevent memory leaks." – BMiner Sep 23 '16 at 18:29
  • It seems that the "finish" event is unfortunately also sent if the stream could not be written because of access restrictions. "finish" doesn't mean "success". – devarni Oct 09 '16 at 16:24
  • @devarni What happens in that case? I mean, the file is not written and a "finish" event will still be fired? – loretoparisi Mar 22 '17 at 23:02
45

Well, usually it is good to avoid asynchronous file operations. Here is the short (i.e. no error handling) sync example:

var fs = require('fs');
fs.writeFileSync(targetFile, fs.readFileSync(sourceFile));
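
If you do want error handling around it, a minimal sketch (keeping the answer's placeholder variable names):

try {
    fs.writeFileSync(targetFile, fs.readFileSync(sourceFile));
} catch (err) {
    console.error('Copy failed:', err);
}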
Tester
  • To say that in general is extremely false, particularly since it leads to people re-slurping files for every request made to their server. This can get expensive. – Catalyst Jun 19 '14 at 16:14
  • Using the `*Sync` methods is totally against nodejs' philosophy! I also think they are slowly being deprecated. The whole idea of nodejs is that it's single-threaded and event-driven. – gillyb Oct 14 '14 at 21:15
  • @gillyb The only reason I can think of for using them is simplicity - if you are writing a quick script that you will only use once, you probably aren't going to be all that bothered about blocking the process. – starbeamrainbowlabs Oct 24 '14 at 07:23
  • I'm not aware of them being deprecated. Sync methods are almost always a terrible idea on a web server but sometimes ideal in something like node-webkit where it only locks up action in the window while files are copying. Throw up a loading gif and maybe a load bar that updates at certain points and let sync methods block all action until the copying is done. It's not really a best-practice thing so much as a when-and-where-they-have-their-place thing. – Erik Reppen Oct 30 '14 at 17:42
  • Keep in mind that this works ONLY if sourceFile fits into memory (i.e.: don't do this with huge files). – david_p Apr 03 '15 at 14:16
  • To give an example of using synchronous copying: I have an npm script that needs a config file. If none exists yet, I copy a default one to the config file location before require'ing the config file. When using the asynchronous copy, the require doesn't see the new config file. – daniel kullmann Jul 08 '15 at 13:19
  • Sync methods are fine when you are interacting with another sync operation or when what you want is to perform a sequential operation (i.e. you would be emulating sync anyway). If the operations are sequential, just avoid the callback hell (and/or promise soup) and use the sync method. In general they should be used with caution on servers, but are fine for most cases that involve CLI scripts. – srcspider Jul 31 '15 at 09:55
  • For tools and build operations, using asynchronous stuff just bloats your code. Good solution. – Flex Elektro Deimling Jan 13 '17 at 20:59
  • I downvoted this but later ended up using it as the best answer - apologies - it won't let me upvote - if you edit it, it will let me! – danday74 Mar 08 '17 at 20:22
  • Yeah, I don't think it's helpful to EVER say something like 'a synchronous op is preferable in nodejs'. That's contrary to what is pretty much rule #1 in Node: make everything async where possible. Emphasis is on "where possible". If your file copy is required for subsequent logic then yes, it should be sync (and even then, maybe not). But if you're spitting out a log that will be used later, or something unrelated to the current application, then it's 100% preferable. For beginners out there, just understand, contrary to this answer, IT IS USUALLY BAD TO AVOID ASYNC OPS IN NODE. – dudewad Oct 23 '17 at 16:17
  • @danielkullmann Of course it won't see the config file if you write the require just after the asynchronous method; that is how you may have designed your process wrongly in the first place. The whole asynchronous philosophy is "do something when I finish"; that means you have to call your require when the Promise resolves... – vdegenne Apr 05 '18 at 20:45
  • @gillyb The `*Sync` methods being deprecated?... hum... that's why `fs.exists()` is deprecated and `fs.existsSync()` is not. lol. – vdegenne Apr 05 '18 at 20:53
  • Asynchronous operations are appropriate on web servers, where Node.js can handle other HTTP requests while the file is being copied. Using synchronous operations is easier, though, and they are appropriate in command-line scripts in which your script is the only one running in the process. – Qwertie Jul 03 '18 at 00:20
  • Asynchronous operations drastically complicate your code. If you follow the "synchronous is bad on a web server" dogma, and it takes you ten times longer to write worse code, and speed isn't even an issue, then you've just written unmaintainable code for no reason. Maybe do some speed tests and see if the extra 20 ms it buys you really matters. Also, are you caching the files anyway? I am selling async framework pipe grease for 1 BTC/Kb if anyone needs some. – user875234 Nov 17 '18 at 03:39
  • How is it better than `fs.copyFileSync`? – Royi May 13 '20 at 06:14
19

If you don't care about it being async, and aren't copying gigabyte-sized files, and would rather not add another dependency just for a single function:

var fs = require('fs');

function copySync(src, dest) {
  var data = fs.readFileSync(src);
  fs.writeFileSync(dest, data);
}
qntm
Andrew Childs
  • @RobGleeson, and it requires as much memory as the file content... I am amazed by the count of upvotes here. – Konstantin Jun 19 '17 at 18:25
  • I've added an "and aren't copying gigabyte-sized files" caveat. – Andrew Childs Nov 22 '17 at 17:16
  • The `fs.existsSync` call should be omitted. The file could disappear in the time between the `fs.existsSync` call and the `fs.readFileSync` call, which means the `fs.existsSync` call doesn't protect us from anything. – qntm Aug 08 '19 at 13:53
  • Additionally, returning `false` if `fs.existsSync` fails is likely poor ergonomics, because few consumers of `copySync` will think to manually inspect the return value every time it's called, any more than we do for `fs.writeFileSync` *et al.* Throwing an exception is actually preferable. – qntm Jan 18 '20 at 15:02
  • The OP does not specifically mention that their files are UTF-8 text, so I'm removing the `'utf-8'` encoding from the snippet too, which means this will now work on any file. `data` is now a `Buffer`, not a `String`. – qntm Mar 13 '20 at 18:06
18

Mike Schilling's solution, with error handling shortened by passing `done` directly as the error event handler:

var fs = require('fs');

function copyFile(source, target, cb) {
  var cbCalled = false;

  var rd = fs.createReadStream(source);
  rd.on("error", done);

  var wr = fs.createWriteStream(target);
  wr.on("error", done);
  wr.on("close", function(ex) {
    done();
  });
  rd.pipe(wr);

  function done(err) {
    if (!cbCalled) {
      cb(err);
      cbCalled = true;
    }
  }
}
Peter Mortensen
Jens Hauke
5
   const fs = require("fs");
   fs.copyFileSync("filepath1", "filepath2"); //fs.copyFileSync("file1.txt", "file2.txt");

This is what I personally use to copy a file and replace another file using Node.js :)

Peter Mortensen
AYO O.
  • This does not answer the question, which is about how to efficiently copy files in an IO-heavy application. – Jared Smith Aug 10 '19 at 13:26
  • @JaredSmith True, but my Google search led me here and this is what I wanted. – codepleb Nov 28 '19 at 14:22
  • I wonder why copyFileSync in an async function wouldn't perform well. I would think it would be optimized to match copyFile or stream copying. – TamusJRoyce Feb 10 '21 at 00:06
3

You may want to use async/await; since Node v10.0.0 it's possible with the built-in fs promises API.

Example:

const fs = require('fs')

const copyFile = async (src, dest) => {
  await fs.promises.copyFile(src, dest)
}
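
A usage sketch with error handling (hypothetical file names):

copyFile('source.txt', 'destination.txt')
  .then(() => console.log('copied'))
  .catch((error) => console.error(error))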

Note:

As of Node v11.14.0 and v10.17.0, the API is no longer experimental.

More information:

Promises API

Promises copyFile

Tamas Szoke
1

Use Node.js's built-in copy function

It provides both async and sync versions:

const fs = require('fs');

// File "destination.txt" will be created or overwritten by default.
fs.copyFile('source.txt', 'destination.txt', (err) => {
  if (err) 
      throw err;
  console.log('source.txt was copied to destination.txt');
});

fs.copyFileSync(src, dest[, mode])
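
For example, a sketch using the optional mode argument to make the copy fail if the destination already exists:

// COPYFILE_EXCL: the operation fails if destination.txt already exists.
fs.copyFileSync('source.txt', 'destination.txt', fs.constants.COPYFILE_EXCL);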

Peter Mortensen
Xin
1

For fast copies you should use the fs.constants.COPYFILE_FICLONE flag. On filesystems that support it, the content of the file is not actually copied; instead, a new file entry is created that points to a copy-on-write "clone" of the source file.

To do nothing/less is the fastest way of doing something ;)

https://nodejs.org/api/fs.html#fs_fs_copyfile_src_dest_flags_callback

let fs = require("fs");

fs.copyFile(
  "source.txt",
  "destination.txt",
  fs.constants.COPYFILE_FICLONE,
  (err) => {
    if (err) {
      // TODO: handle error
      console.log("error");
    }
    console.log("success");
  }
);

Using promises instead:

let fs = require("fs");
let util = require("util");
let copyFile = util.promisify(fs.copyFile);


copyFile(
  "source.txt",
  "destination.txt",
  fs.constants.COPYFILE_FICLONE
)
  .then(() => console.log("success"))
  .catch(() => console.log("error"));
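
As a comment below points out, newer Node.js versions also expose fs.promises.copyFile directly, so the util.promisify step can be skipped; a sketch:

fs.promises
  .copyFile("source.txt", "destination.txt", fs.constants.COPYFILE_FICLONE)
  .then(() => console.log("success"))
  .catch(() => console.log("error"));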
chpio
  • `fs.promises.copyFile` – gman Nov 24 '19 at 08:55
  • Re *"To do nothing/less is the fastest way of doing something"*: Yes, indeed. That is the first rule of optimisation - ***eliminate unnecessary operations***. That is in contrast to making the existing ones go faster, e.g. by fiddling with compiler flags. – Peter Mortensen Oct 27 '20 at 22:57
0

benweet's solution, but also checking that the source file exists and is accessible before copying:

var fs = require('fs');

function copy(from, to) {
    return new Promise(function (resolve, reject) {
        fs.access(from, fs.constants.F_OK, function (error) {
            if (error) {
                reject(error);
            } else {
                var inputStream = fs.createReadStream(from);
                var outputStream = fs.createWriteStream(to);

                function rejectCleanup(error) {
                    inputStream.destroy();
                    outputStream.end();
                    reject(error);
                }

                inputStream.on('error', rejectCleanup);
                outputStream.on('error', rejectCleanup);

                outputStream.on('finish', resolve);

                inputStream.pipe(outputStream);
            }
        });
    });
}
Peter Mortensen
Pedro Rodrigues
0

You can do it using the fs-extra module very easily:

const fse = require('fs-extra');

let srcDir = 'path/to/file';
let destDir = 'path/to/destination/directory';

// To move a file permanently from a directory
// (the Sync variants do not take a callback, hence the asynchronous move/copy here)
fse.move(srcDir, destDir, function (err) {
    if (err) {
        console.error(err);
    } else {
        console.log("success!");
    }
});

Or

// To copy a file from a directory
fse.copy(srcDir, destDir, function (err) {
    if (err) {
        console.error(err);
    } else {
        console.log("success!");
    }
});
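
If you prefer the synchronous variants (no callback; errors are thrown), a sketch with the same paths:

try {
    fse.moveSync(srcDir, destDir);   // or: fse.copySync(srcDir, destDir);
    console.log("success!");
} catch (err) {
    console.error(err);
}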
Peter Mortensen
Arya
0

I wrote a little utility to test the different methods:

https://www.npmjs.com/package/copy-speed-test

Run it with:

npx copy-speed-test --source someFile.zip --destination someNonExistentFolder

It does a native copy using child_process.exec(), a copy using fs.copyFile, and it uses createReadStream with a variety of buffer sizes (you can change the buffer sizes by passing them on the command line; run npx copy-speed-test -h for more info).
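
For reference, varying the stream buffer size looks roughly like this; highWaterMark is the relevant option, and the 1 MiB value and output name here are illustrative, not the utility's defaults:

const fs = require('fs');

// Use a 1 MiB read buffer instead of the 64 KiB default for fs streams.
fs.createReadStream('someFile.zip', { highWaterMark: 1024 * 1024 })
    .pipe(fs.createWriteStream('copy.zip'));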

Roaders
-1

Mike's solution, but with promises:

const FileSystem = require('fs');

exports.copyFile = function copyFile(source, target) {
    return new Promise((resolve,reject) => {
        const rd = FileSystem.createReadStream(source);
        rd.on('error', err => reject(err));
        const wr = FileSystem.createWriteStream(target);
        wr.on('error', err => reject(err));
        wr.on('close', () => resolve());
        rd.pipe(wr);
    });
};
mpen
-1

An improvement on another answer.

Features:

  • If the destination folders do not exist, they will be created automatically (the other answer only throws an error).
  • It returns a promise, which makes it easier to use in a larger project.
  • It allows you to copy multiple files, and the promise resolves when all of them are copied.

Usage:

var onePromise = copyFilePromise("src.txt", "dst.txt");
var anotherPromise = copyMultiFilePromise([["src1.txt", "dst1.txt"], ["src2.txt", "dst2.txt"]]);

Code:

var fs = require('fs');
var path = require('path');

function copyFile(source, target, cb) {
    console.log("CopyFile", source, target);

    var ensureDirectoryExistence = function (filePath) {
        var dirname = path.dirname(filePath);
        if (fs.existsSync(dirname)) {
            return true;
        }
        ensureDirectoryExistence(dirname);
        fs.mkdirSync(dirname);
    }
    ensureDirectoryExistence(target);

    var cbCalled = false;
    var rd = fs.createReadStream(source);
    rd.on("error", function (err) {
        done(err);
    });
    var wr = fs.createWriteStream(target);
    wr.on("error", function (err) {
        done(err);
    });
    wr.on("close", function (ex) {
        done();
    });
    rd.pipe(wr);
    function done(err) {
        if (!cbCalled) {
            cb(err);
            cbCalled = true;
        }
    }
}

function copyFilePromise(source, target) {
    return new Promise(function (accept, reject) {
        copyFile(source, target, function (data) {
            if (data === undefined) {
                accept();
            } else {
                reject(data);
            }
        });
    });
}

function copyMultiFilePromise(srcTgtPairArr) {
    var copyFilePromiseArr = new Array();
    srcTgtPairArr.forEach(function (srcTgtPair) {
        copyFilePromiseArr.push(copyFilePromise(srcTgtPair[0], srcTgtPair[1]));
    });
    return Promise.all(copyFilePromiseArr);
}
ch271828n
-2

All the previous solutions that do not check the existence of the source file are dangerous. For example:

fs.stat(source, function (err, stat) { if (err) { reject(err); } /* ... */ });

Without such a check, if the source and target are swapped by mistake, your data will be permanently lost without any error being noticed.
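
A minimal sketch of that check in context, built on the promise-based stream copy from earlier answers (names are hypothetical):

var fs = require('fs');

function safeCopy(source, target) {
    return new Promise(function (resolve, reject) {
        fs.stat(source, function (err, stat) {
            if (err) return reject(err); // source missing/unreadable: abort before touching the target
            var rd = fs.createReadStream(source);
            var wr = fs.createWriteStream(target);
            rd.on('error', reject);
            wr.on('error', reject);
            wr.on('finish', resolve);
            rd.pipe(wr);
        });
    });
}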

Peter Mortensen
stancikcom
  • This also has a race condition: the file could be destroyed between stat-ing it and reading/writing/copying. It's always better to just try the operation and deal with any resulting error. – Jared Smith Aug 10 '19 at 13:28
  • Checking the existence of the target before a write operation ensures you do not overwrite the target by accident, e.g. it covers the scenario where destination and source are set to the same path by the user by mistake... it is too late to wait for the write operation to fail... Whoever gave me the (-1), please review your ranking once this incident happens in your project :-) Re. races - on heavy-traffic sites it is always recommended to have one process handle operations requiring sync assurance - yes, it is then a performance bottleneck. – stancikcom Aug 12 '19 at 09:26
  • I didn't downvote because you're *wrong*, I downvoted because this isn't an answer to the question. It should be a cautionary comment on an existing answer. – Jared Smith Aug 12 '19 at 14:43
  • Well - you are right, e.g. Andrew Childs' solution (with 18 upvotes) will run out of resources on a server with large files... I would write comments to him but I don't have the reputation to comment - therefore you have seen my post standalone... but Jared, your downvote means a simple takeaway for me - keep silent and let people write and share dangerous code that mostly "works"... – stancikcom Aug 13 '19 at 16:55
  • I get it, no one *likes* negative feedback. But it's just a downvote. I stand by my reason for giving it, as this does not answer the question the OP asked and is short enough to be a comment. You can take it however you want, but if you blow that sort of thing out of proportion you are going to find Stack Overflow to be a very frustrating experience. – Jared Smith Aug 13 '19 at 18:01