
I have a Rails 4 application that allows users to upload videos using the jQuery Dropzone plugin and the paperclip gem. Each uploaded video is encoded into multiple formats and uploaded to Amazon S3 in the background using the delayed_paperclip, av-transcoder and sidekiq gems.

Everything works fine with most videos, but with larger ones (e.g. 1.1GB), after the upload reaches what seems like the end of the Dropzone progress bar, Nginx returns a 504 Gateway Time-out.

As far as the server goes, the Rails app runs on Nginx + Passenger on a couple of servers that sit behind a load balancer (also Nginx). I do not have timeouts set in the upstream section of the load balancer, and client_max_body_size is set to 2000M on both the load balancer and the servers. I've tried setting passenger_pool_idle_time to a large value (600), which didn't help, and I've also tried setting send_timeout (600s); nothing made any difference.
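For reference, here's a sketch of where those directives live on the load balancer. The upstream name and server addresses are placeholders, not my actual config, and the 600s values just mirror what I already tried; proxy_read_timeout/proxy_send_timeout default to 60s in nginx, which I understand is a common source of 504s on long requests:

    # Hypothetical load-balancer config; upstream/server names are placeholders.
    upstream rails_app {
        server app1.internal;
        server app2.internal;
    }

    server {
        listen 80;
        client_max_body_size 2000M;   # already set on LB and app servers
        send_timeout 600s;            # already tried, no difference

        location / {
            proxy_pass http://rails_app;
            proxy_connect_timeout 600s;
            proxy_send_timeout    600s;   # sending the request body upstream
            proxy_read_timeout    600s;   # waiting for the upstream response
        }
    }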

Note: whenever I made those changes, I applied them to the config files on both servers as well as on the load balancer, and always restarted Nginx afterwards.

I've also read several answers to similar problems, like this one and this one, but I still can't figure this out; Google wasn't much more helpful either.

Some extra notes for those unfamiliar with the paperclip/delayed_paperclip flow: the file is uploaded to the server, and at that point the operation is done as far as the user is concerned; the post-processing of the videos (encoding and uploading to S3) is pushed to Redis as a job, and Sidekiq processes it in the background whenever it has time/resources.
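For illustration, a minimal sketch of what the model side of that flow might look like. The model name, attachment name and styles are hypothetical; the :transcoder processor comes from the av-transcoder gem and process_in_background from delayed_paperclip:

    # Hypothetical model; names and styles are examples only.
    class Video < ActiveRecord::Base
      # paperclip + av-transcoder: encode into multiple formats
      has_attached_file :clip,
                        styles: { mp4: { format: "mp4" }, webm: { format: "webm" } },
                        processors: [:transcoder]

      validates_attachment_content_type :clip, content_type: /\Avideo\/.*\z/

      # delayed_paperclip: push post-processing to a background job (Sidekiq)
      process_in_background :clip
    end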

What could be causing this issue? How can I debug this and solve it?

UPDATE

Thanks to Sergey's answer I was able to solve the issue. Since I was restricted to a specific version of Paperclip, I couldn't update to the newest version that has the fix, so I'll leave here what I ended up doing.

In the engine that I use to handle the uploads, I added the following code to the engine_name.rb file to override the Paperclip methods that needed fixing:

  Paperclip::AbstractAdapter.class_eval do
    # Hard-link the uploaded file into the tempfile instead of copying it
    def copy_to_tempfile(src)
      link_or_copy_file(src.path, destination.path)
      destination
    end

    def link_or_copy_file(src, dest)
      Paperclip.log("Trying to link #{src} to #{dest}")
      FileUtils.ln(src, dest, force: true) # overwrite existing
      @destination.close
      @destination.open.binmode
    rescue Errno::EXDEV, Errno::EPERM, Errno::ENOENT => e
      # Linking fails across filesystems (EXDEV), without permission (EPERM)
      # or when the source is gone (ENOENT); fall back to a regular copy
      Paperclip.log("Link failed with #{e.message}; copying link #{src} to #{dest}")
      FileUtils.cp(src, dest)
    end
  end

  Paperclip::AttachmentAdapter.class_eval do
    def copy_to_tempfile(source)
      # Link the staged file when it is still on local disk;
      # otherwise download it from storage as before
      if source.staged?
        link_or_copy_file(source.staged_path(@style), destination.path)
      else
        source.copy_to_local_file(@style, destination.path)
      end
      destination
    end
  end

  Paperclip::Storage::Filesystem.class_eval do
    def flush_writes #:nodoc:
      @queued_for_write.each do |style_name, file|
        FileUtils.mkdir_p(File.dirname(path(style_name)))
        begin
          # Try a cheap move first (handles hard-linked files, see move_file below)
          move_file(file.path, path(style_name))
        rescue SystemCallError
          # Fall back to a streaming copy, e.g. when moving across filesystems
          File.open(path(style_name), "wb") do |new_file|
            while chunk = file.read(16 * 1024)
              new_file.write(chunk)
            end
          end
        end
        unless @options[:override_file_permissions] == false
          resolved_chmod = (@options[:override_file_permissions] &~ 0111) || (0666 &~ File.umask)
          FileUtils.chmod( resolved_chmod, path(style_name) )
        end
        file.rewind
      end

      after_flush_writes # allows attachment to clean up temp files

      @queued_for_write = {}
    end

    private

    def move_file(src, dest)
      # Support hardlinked files
      if File.identical?(src, dest)
        File.unlink(src)
      else
        FileUtils.mv(src, dest)
      end
    end

  end
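A note on why this helps: FileUtils.ln creates a hard link, so no bytes are copied at all, whereas the stock behaviour copied the (in my case 1.1GB) file several times per upload. When linking isn't possible, the code degrades gracefully to FileUtils.cp. The move_file override is needed because, after linking, source and destination can point at the same inode; File.identical? detects that case, and it's then enough to unlink the source instead of moving it.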
Julien
  • Just trying to get this out of the equation: what's the server spec? Have you tried increasing the server resources? I know from experience that paperclip is a very demanding process, even with images. I've recently added video capabilities to one of my apps (https://games.directory, GIF to MP4) and I had to scale because of the load paperclip was producing while decoding the GIFs. I'm also using nginx, but with Rails 5 and Puma. – Vlad Sep 28 '16 at 07:05
  • CPU: 8, RAM: 8GB, HDD: 50GB – Julien Sep 28 '16 at 15:24
  • Did you manage to solve the issue? I faced the same one and can’t figure out the solution. – Sergey Mell Jan 24 '18 at 13:11

1 Answer


I faced a similar issue a while ago; maybe my experience will help.

We had an m3.medium instance on Amazon with 4GB of memory, and users could upload large video files. We ran into 504 errors when uploading files larger than 400MB.

While monitoring and logging the upload process, it turned out that Paperclip creates four files per attachment, so all of the instance's resources were being spent on file system work. There is a description of this problem here:

https://github.com/thoughtbot/paperclip/issues/1642

along with a proposed solution: use links instead of copies where possible. You can see the corresponding code changes here:

https://github.com/arnonhongklay/paperclip/commit/cd80661df18d7cd112944bfe26d90cb87c928aad

However, two days ago Paperclip 5.2.0 was released, implementing a similar solution, so it now creates only one file per attachment. As a result our file system is no longer overloaded, and after updating to 5.2.0 we stopped receiving 504 errors.

Conclusion:

  1. If you're restricted to an older Paperclip version for some reason, use the monkey patch from the links above.
  2. Otherwise, update Paperclip to version 5.2.0. That should help (see the Gemfile sketch below).
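If the app's gems are managed with Bundler (an assumption on my part), the bump for option 2 is a one-line change:

    # Gemfile: require the release that ships the hard-link fix
    gem "paperclip", ">= 5.2.0"

followed by bundle update paperclip and a redeploy.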
Sergey Mell