Rails & PostgreSQL with CapRover on DigitalOcean Droplet (Part III)

Robert Guiscard
5 min read · Dec 30, 2019


You can use docker images to check the size of an image. The Docker image for such a simple Rails application is about 650MB, which is quite large compared to others (the Captain image is 1GB, though). To shrink it, we can try multi-stage builds. A good template can be found in an article in Japanese. Even if you cannot read the article, the Dockerfile inside explains most of the steps. We can rewrite our Dockerfile accordingly:
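For reference, the size check itself looks like this (the myapp tag is a placeholder for whatever your image is named):

```shell
# List all local images with their sizes
docker images
# Or print just the name and size of one image
docker images --format '{{.Repository}}:{{.Tag}} {{.Size}}' myapp
```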

FROM ruby:2.6.5-alpine AS gem
ENV RAILS_ENV production
WORKDIR /myapp
RUN apk add --update --no-cache nodejs yarn postgresql-client postgresql-dev tzdata build-base
# install gems
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install --deployment --without development test
# install npm packages
COPY package.json .
COPY yarn.lock .
RUN yarn install --frozen-lockfile
# compile assets
COPY Rakefile .
COPY bin bin
COPY .browserslistrc .
COPY postcss.config.js .
COPY babel.config.js .
COPY config config
COPY app/assets app/assets
COPY app/javascript app/javascript
# Assets, to fix missing secret key issue during building
RUN SECRET_KEY_BASE=dumb bundle exec rails assets:precompile
FROM ruby:2.6.5-alpine
ENV RAILS_ENV production
ENV RAILS_LOG_TO_STDOUT 1
ENV RAILS_SERVE_STATIC_FILES 1
WORKDIR /myapp
RUN apk add --update --no-cache postgresql-client postgresql-dev tzdata
COPY . /myapp
COPY --from=gem /usr/local/bundle /usr/local/bundle
COPY --from=gem /myapp/vendor/bundle /myapp/vendor/bundle
COPY --from=gem /myapp/public/assets /myapp/public/assets
COPY --from=gem /myapp/public/packs /myapp/public/packs
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 80
# Start the main process.
WORKDIR /myapp
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

It basically runs everything in one image, then copies only the necessary parts into a new image to run. This way, none of the intermediate files are carried over to the new image. The image size is reduced from 650MB to 250MB. Note that due to the use of --deployment for bundle install, gems are installed under vendor/bundle. Therefore, we need to copy both /usr/local/bundle and vendor/bundle: the former contains the bundle config and the latter the actual gems.
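To see where the remaining size comes from, docker history lists the final image's layers with their sizes; only the second stage's layers appear there, which is why the build tools from the first stage never inflate the result (the myapp:latest tag is a placeholder):

```shell
# Show per-layer sizes of the final image, largest contributors included
docker history --format '{{.Size}}\t{{.CreatedBy}}' myapp:latest
```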

We can also use .dockerignore to avoid copying unnecessary files. Just put it under the root directory of the project:

.bundle
.dockerignore
.git
.gitignore
Dockerfile
log
node_modules
public/assets
public/packs
README.md
storage
test
tmp

That article also deletes some intermediate files from the gems after the asset compilation command:

RUN SECRET_KEY_BASE=dumb bundle exec rails assets:precompile \
&& find vendor/bundle -name "*.c" -delete \
&& find vendor/bundle -name "*.o" -delete

This cuts the image size down to 150MB.

— — —

Once those settings work for a simple Rails application, you can try applying them to a more complicated one. Here are some issues I ran into; they might be relevant to you.

The asset compilation may require the whole Rails application for some reason. So instead of copying individual files before assets:precompile, I need to do COPY . /myapp to copy the whole application source code. This causes another problem: a missing RAILS_MASTER_KEY, because config/environments/production.rb asks for it. To keep using SECRET_KEY_BASE for asset compilation, I add another environment called docker_build. First, add an extra entry in config/database.yml:

docker_build:
  <<: *default
  database: docker_build

Also add a new config/environments/docker_build.rb:

Rails.application.configure do
  config.eager_load = true
end

Then require the master key in production in config/environments/production.rb:

# Ensures that a master key has been made available in either ENV["RAILS_MASTER_KEY"]
# or in config/master.key. This key is used to decrypt credentials (and other encrypted files).
config.require_master_key = true
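With require_master_key turned on, the container will refuse to boot in production unless the key is provided at runtime. In CapRover you would set RAILS_MASTER_KEY as an environment variable in the app's configuration; outside CapRover, a plain docker run equivalent would be (the myapp tag and the key value are placeholders):

```shell
# Supply the master key at runtime rather than baking it into the image.
# "your_master_key" is a placeholder for the contents of config/master.key.
docker run -e RAILS_MASTER_KEY=your_master_key -p 3000:3000 myapp
```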

The whole Dockerfile now looks like this:

FROM ruby:2.6.5-alpine AS gem
ENV RAILS_ENV production
WORKDIR /myapp
RUN apk add --update --no-cache nodejs yarn postgresql-client postgresql-dev tzdata build-base
# install gems
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install --deployment --without development test
# install npm packages
COPY package.json .
COPY yarn.lock .
RUN yarn install --frozen-lockfile
# compile assets
ENV RAILS_ENV docker_build
COPY . /myapp
# Assets, to fix missing secret key issue during building
RUN SECRET_KEY_BASE=dumb bundle exec rails assets:precompile \
&& find vendor/bundle -name "*.c" -delete \
&& find vendor/bundle -name "*.o" -delete
FROM ruby:2.6.5-alpine
ENV RAILS_ENV production
ENV RAILS_LOG_TO_STDOUT 1
ENV RAILS_SERVE_STATIC_FILES 1
WORKDIR /myapp
RUN apk add --update --no-cache postgresql-client postgresql-dev tzdata
COPY . /myapp
COPY --from=gem /usr/local/bundle /usr/local/bundle
COPY --from=gem /myapp/vendor/bundle /myapp/vendor/bundle
COPY --from=gem /myapp/public/assets /myapp/public/assets
COPY --from=gem /myapp/public/packs /myapp/public/packs
# For some reason, .dockerignore does not work properly
RUN rm -rf test \
&& rm -rf vendor/bundle/ruby/2.6.0/cache \
&& rm -rf README.md
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
WORKDIR /myapp
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

With this arrangement, I can build the image without the master key in docker_build mode, while running the application with it in production mode.

— — —

Another problem I had is that the Docker container dies if the database does not exist. But to create one, the container needs to run first. Running docker run -it image_id bundle exec rails db:create will not work either.

Therefore, I added a new task that checks for the existence of the database, in lib/tasks/database_exist.rake:

namespace :db do
  desc "Checks to see if the database exists"
  task :exists do
    begin
      Rake::Task['environment'].invoke
      ActiveRecord::Base.connection
    rescue
      exit 1
    else
      exit 0
    end
  end
end
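The task exits 0 when the database is reachable and 1 otherwise, which is what lets it chain with || in the shell. The pattern itself behaves like this (with shell functions standing in for the real rake and rails commands, for illustration only):

```shell
# "check || create": the create command runs only when the check fails.
check_db() { false; }            # stand-in for: bundle exec rake db:exists
create_db() { echo "creating"; } # stand-in for: bundle exec rails db:create

check_db || create_db
```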

Then add this check to entrypoint.sh like this:

#!/bin/sh
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
bundle exec rake db:exists || bundle exec rake db:create
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"

Whenever there is no database, it creates one, so the Docker container can start smoothly. Please note that I didn't add an automatic db:migrate; you can do so if you like. It is just a personal habit of mine to run migrations manually.
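If you do prefer automatic migrations, one common variant (my assumption, not part of the article's setup) is to add a migrate step to entrypoint.sh before exec:

```shell
#!/bin/sh
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
bundle exec rake db:exists || bundle exec rake db:create
# Run any pending migrations on every start (a no-op when there are none)
bundle exec rails db:migrate
exec "$@"
```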

— — —

Heroku supports PostgreSQL export and import. We can use it to migrate data from Heroku to CapRover. First, download the database dump by following the Heroku guide. The dump can then be copied to the DigitalOcean droplet via scp like this (assuming a non-default SSH key):

scp -o "IdentitiesOnly=yes" -i .ssh/another_rsa latest.dump root@your_host_ip:/root

It can then be copied into the Docker container like this:

docker cp latest.dump container_id:/tmp/latest.dump

Once it is in the container, restore the database with this command from the Heroku guide, after first going into the container:

$ docker exec -it container_id /bin/sh
# once inside the postgresql container
$ pg_restore --verbose --clean --no-acl --no-owner -U username -d mydb /tmp/latest.dump

You can pipe the dump to the docker command without copying it into the container. You can also back up the PostgreSQL database straight to the droplet. Read this article for details.
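The piping variant can look like this: docker exec -i keeps stdin open, so the dump is streamed from the host and never has to be copied into the container (container_id, username, and mydb are placeholders as before):

```shell
# Restore directly from the host file over stdin, skipping docker cp
docker exec -i container_id pg_restore --verbose --clean --no-acl --no-owner \
  -U username -d mydb < latest.dump
```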

— — —

Another good thing about CapRover is that you can run multiple instances through its web interface:

Go to the "Apps" section and find the Rails application. Change the instance count and save. That's it!
