Deploying Dart Apps to Linux

Basic guidelines for running Dart servers on VPS, on premises, or anywhere running Linux…

So, you’ve written a backend for your app, and you’re ready to launch and debut your product to the world. There’s just one problem, though – you don’t have it running on a server yet, nor do you know how to get that done.

No need to fear – this article will take you from “noob” to “I have at least some semblance of knowing what I’m doing,” in no time flat.

Future installments will include instructions on deployment using tools like Docker.


Step 1. Getting a server

Step 2. Getting the hell away from the root account

Step 3. Putting up a firewall

Step 4. Installing Dart

Step 4a. Installing a specific version of Dart

Step 5. Getting your code onto the server

Step 6. Running your Dart code as a daemon

Step 7. Reverse proxying via nginx

Step 8. Getting an HTTPS certificate via Letsencrypt

Step 9. Enabling HTTP/2


What are our goals?

For the purposes of this article, I will assume that when you say, “deploy,” you mean something along the lines of:

  • Running a program (your backend) continuously, 24/7
  • Running that program on startup, automatically
  • Restarting that program if it crashes
  • Obtaining logs and error messages when that program crashes
  • Ensuring that the server is not hacked/compromised
  • Making the server as fast as possible
  • Serving the application over HTTPS and HTTP/2
  • Caching static files

In this tutorial, we make heavy use of apt-get, systemd, and nginx.

Step 1. Getting a server

Of course, you’ll first need a server to deploy your application to. If you have a physical device (e.g. a Linux laptop or desktop, along with a keyboard and screen), you can continue to the next step.

If not, you’ll need a server to which you have network access. You must be able to ssh into the server as root (or a sudo user) to be able to continue with this tutorial.

If you’re using DigitalOcean (if not, here’s my referral link), click the “Create” button at the top of the page, and then click “Droplet”:

The top right of the dashboard.

You’ll then want to pick a Linux distribution to use, and pick the specifications for your droplet. I typically pick Ubuntu 16.04.x. For this demo, I went with the smallest size possible, a $5/mo box. You might want to consider enabling backups for your instance, as well, especially if you’ll be holding critical data that you don’t want to lose permanently in case of a mishap.

Picking a distribution and droplet size.

I highly recommend providing an SSH key; otherwise, you’ll be given a password with which to log into your droplet (and this password will be emailed to you, in plain text). Even if you opt for a password, if you even slightly care about not getting compromised, add your SSH key to the server manually, and get rid of password login ASAP.

Adding SSH key(s) to the new droplet.

Once you’re done, you can then remotely access the shell on your server by running ssh root@<the ip> in your terminal.

Step 2: Getting the hell away from the root account

Assuming the absolute worst case, your server will get hacked by an evil genius, and you wrote your backend in such a way that the attacker is able to execute arbitrary commands at whim. You might as well pack your bags and start hunting for a new job, because root access would give the hacker impunity to do literally anything on the box, whether it’s stealing data, deleting the server code, or hosting a botnet on your tab.

We’ll make two users; one will have sudo privileges, and be the one which we’ll use for administrative tasks for the remainder of this tutorial. The other will be unprivileged; we’ll run the Web server as this user, so that even if the application is compromised, an attacker poses much less of a threat. They will be named, for the purposes of this demo, sysadmin and web, respectively. The names are up to you – feel free to change them.

sudo adduser --disabled-password sysadmin
sudo adduser --disabled-password web
sudo usermod -aG sudo sysadmin

Note that we disabled passwords for both users. We’ll only be able to log into them via su, or ssh. We’ll then need to add our SSH key to both users (or at least sysadmin), so that we can log in.

On your local system (likely the computer you are reading this article on), run the following command to view your SSH public key (note that if you saved it as anything other than ~/.ssh/id_rsa.pub, you’ll have to change this command):

# Windows users, ignore this and keep reading!
cat ~/.ssh/id_rsa.pub

Windows users who do not have MinGW, Cygwin, or a similar environment installed will need to open %USERPROFILE%/.ssh/id_rsa.pub in Notepad (assuming that’s where you saved the file).

Copy the contents.

Back on the deployment server, run the following to log into sysadmin and enable SSH access:

# Log in as `sysadmin`
sudo su - sysadmin

# Create the SSH dir if it doesn't exist, and prevent other users from accessing it.
mkdir -p ~/.ssh
chmod -R 700 ~/.ssh

# Create the `authorized_keys` file, and only allow read/write
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

To actually whitelist your public key, you’ll need to edit ~/.ssh/authorized_keys and paste in the contents of your local ~/.ssh/id_rsa.pub. You can accomplish this by running nano ~/.ssh/authorized_keys (or vim, emacs, etc.), and pasting in the contents.
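If you’d rather not edit files over SSH, an equivalent approach is to append the key in a single command from your local machine. This is a sketch: it assumes you can still log in as root, and that your public key lives at ~/.ssh/id_rsa.pub.

```shell
# Run this LOCALLY. It appends your public key to sysadmin's authorized_keys
# on the server. Replace <the ip> with your server's IP address.
cat ~/.ssh/id_rsa.pub | ssh root@<the ip> 'cat >> /home/sysadmin/.ssh/authorized_keys'
```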

However, there’s still one thing wrong – using sudo will still prompt us for a password, even though sysadmin has none. We’ll need to run the visudo command to allow the user to run sudo without being prompted for a password.

Right under the line that reads # User privilege specification, add the following:

sysadmin ALL=(ALL) NOPASSWD:ALL

Save the file. sysadmin is now capable of running sudo commands without a password prompt.

At this point, you can also remove your SSH key from root‘s authorized_keys file, if you want to entirely prevent network logins as root.

Now that we have our sysadmin account hooked up to ssh, we can close our root session, and log in as sysadmin. Otherwise, you can continue as-is, since in the above command series, we already logged into the sysadmin account.
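If you want to go further and disable password logins at the SSH level entirely, the relevant settings live in /etc/ssh/sshd_config. This is a sketch based on stock OpenSSH defaults; double-check your distribution’s file before relying on it:

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
```

After editing, run sudo service ssh restart. Verify that key-based login as sysadmin works in a separate session first, or you risk locking yourself out.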

Step 3. Putting up a firewall

Your server has a public IP address. The thing about IP addresses is, they follow a very predictable pattern. And bots really like predictable patterns. If you don’t put up a firewall, expect to be bombarded by pings to common ports, like 25, 110, etc. Even after we put up the firewall, there will still be bots trying to access the ports we’ve opened, but we’ll be well-equipped to deny malicious traffic at those endpoints.

Run the following as sysadmin to configure the ufw firewall to only allow access to ports 22, 80, and 443 (SSH, HTTP, and HTTPS, respectively):

sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable # Turn on the firewall

Step 4. Installing Dart

We’ll install Dart system-wide, but will only add it to the unprivileged user’s PATH. This will help us avoid accidentally running our Web application on an account with super-user privileges.

Following the official instructions, do the following to add the dart package to apt‘s cache:

sudo apt-get update
sudo apt-get install apt-transport-https
sudo sh -c 'curl https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -'
sudo sh -c 'curl https://storage.googleapis.com/download.dartlang.org/linux/debian/dart_stable.list > /etc/apt/sources.list.d/dart_stable.list'
sudo apt-get update

To install the latest version of Dart, run sudo apt-get install dart. To install a specific stable version, e.g. 1.24.0, run sudo apt-get install dart=1.24.0-1. To install a specific dev version, see step 4a below.

Step 4a. Installing a specific version of Dart

You can see which versions of Dart are available for installation by running apt-cache madison dart. If the version you want (e.g. a dev version) is not available, you’ll have to do a bit more work here.

The URL to the .deb file you’re looking for is: https://storage.googleapis.com/dart-archive/channels/dev/release/<version>/linux_packages/dart_<version>-1_amd64.deb

Note that if you’re looking for a stable version, change dev to stable. However, all stable versions of Dart are available on apt.

Run the following to install it:

wget -O dart.deb "<the url here>"
sudo dpkg -i dart.deb

Step 5. Getting your code onto the server

The exact actions here depend on how you’re storing your application code. Regardless of what you do, perform the following first to act as the unprivileged user and add dart and pub to the PATH:

# Log in
sudo su - web

# Add Dart to the PATH
echo 'export PATH="/usr/lib/dart/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

Next, get your code onto the server. You might be using git, in which case you can simply do a git clone. You might also not be using version control at all, in which case you can use the sftp shell. For this demo, I put my files in ~web/app. Regardless of where you put the files, make sure you remember the absolute path. We’ll be using it somewhere down the road.

You should probably commit your pubspec.lock file. This way, the deployment will get the exact same versions of your dependencies that you’ve been developing and testing against, so that you’re not surprised in production. Next, download your dependencies. A $5/mo box doesn’t have much RAM, so add the --no-precompile flag to pub get; you usually don’t need precompilation in production, and the amount of RAM it requires is likely more than what’s available on your tiny box.

pub get --no-precompile

Finally, we can exit web for good. We won’t need to do anything else as the unprivileged user (it can’t really do much to begin with).

Note: Your Dart web server should listen on 127.0.0.1, not 0.0.0.0. nginx will proxy external traffic to it, and the firewall blocks direct access to its port anyway.
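As a minimal sketch of what that looks like (this is not this article’s actual backend, just an illustration using dart:io), a server bound to the loopback interface on port 3000 might be:

```dart
import 'dart:io';

Future main() async {
  // Bind to 127.0.0.1 so only local processes (i.e. nginx) can reach us.
  var server = await HttpServer.bind('127.0.0.1', 3000);

  await for (HttpRequest request in server) {
    request.response
      ..write('Hello from Dart!')
      ..close();
  }
}
```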

Step 6. Running your Dart code as a daemon

Using systemd, we can configure our server to run our Dart backend 24/7 in the background, restarting it on crashes and system startup. Write the following to /etc/systemd/system/my_app.service, using the editor of your choice:

[Unit]
Description=My Dart app

[Service]
User=web
WorkingDirectory=/home/web/app
ExecStart=/usr/lib/dart/bin/dart bin/main.dart
Restart=always
# Uncomment this line if using Angel:
# Environment=ANGEL_ENV=production

[Install]
WantedBy=multi-user.target

Make sure to use sudo when editing the file, or you will not be permitted to save your changes. In addition, edit the above configuration with the paths relevant for your application.

Next, enable the service, and start it as a daemon. You’ll be able to use start, status, stop, etc. with it, just like any other Linux service:

sudo systemctl enable my_app
sudo systemctl daemon-reload
sudo service my_app start

You can view the last 50 lines of stdout/stderr/logs by running sudo journalctl -u my_app.service -n 50. Assuming your application runs at port 3000, you can then run curl localhost:3000 and see that your daemon is, in fact, running.

When you update the code you’re running, you’ll simply have to git pull (or however you’re getting the code onto the server), and then run sudo service my_app restart. If you’ve changed dependencies, then, of course, you’ll need to run the necessary commands, like pub get --no-precompile again. If you find yourself doing this a lot, then down the road, you might consider automating this (keyword: continuous integration).
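If you end up repeating that routine often, it can be collected into a small script. This is a sketch; deploy.sh, the ~/app path, and the my_app service name are all assumptions based on this demo’s layout:

```shell
#!/bin/sh
# deploy.sh -- run as sysadmin to update and restart the app.
set -e

# Pull the latest code and refresh dependencies as the unprivileged user.
sudo su - web -c 'cd ~/app && git pull && pub get --no-precompile'

# Restart the daemon so the new code takes effect.
sudo service my_app restart
```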

At this point, we’ve achieved most of the goals outlined at the start of this article. All that’s left is to set up the front-facing Web server.

Step 7. Reverse proxying via nginx

First, let’s install nginx:

sudo apt-get update
sudo apt-get install nginx

Next, we’ll create a basic configuration that forwards HTTP traffic to port 3000. It will check if a file exists in /home/web/app/web first, though. This way, we can serve (and cache!) static files, and still have our application server receive traffic. Using sudo, write the following to /etc/nginx/sites-available/my_app.conf:

server {
  listen 80 default_server;
  root /home/web/app/web;

  location / {
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1; # Important, do not omit
  }
}

Next, run these commands to enable the configuration we just created, and refresh nginx:

# Remove the default configuration
sudo unlink /etc/nginx/sites-enabled/default

# Enable your configuration
sudo ln -s /etc/nginx/sites-available/my_app.conf /etc/nginx/sites-enabled/my_app.conf

# Reload nginx
sudo service nginx reload

Your server is effectively now live – enter its IP into your Web browser to see it in the wild!

Note: You might consider adding a server_name directive to the nginx configuration.
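For example, with a domain pointed at your droplet, the top of the server block might look like this (example.com is a placeholder). While you’re in there, you can also ask browsers to cache the static files nginx serves; the 7-day expiry is just an illustrative choice:

```
server {
  listen 80 default_server;
  server_name example.com www.example.com; # Placeholder domain

  root /home/web/app/web;

  location / {
    try_files $uri @proxy;
    expires 7d; # Cache static files in the browser for a week
  }

  # ... @proxy block as before ...
}
```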

Step 8. Getting an HTTPS certificate via Letsencrypt

Obtaining an HTTPS certificate is easier now than ever, thanks to the amazing work by the folks behind Letsencrypt. So why not use it?

As per the official instructions, run the following to make certbot available on the system:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx

Run sudo certbot --nginx to get an HTTPS certificate for your domain (if you didn’t set a server_name in nginx, but plan to point a domain to this app, now is the time to do so). Pick the “Redirect to HTTPS” option, so that incoming requests are always sent over HTTPS.

Step 9. Enabling HTTP/2

The final step is maybe the easiest. Enabling HTTP/2 can help to improve load times for users of our application. Find the following line in your my_app.conf:

listen 443 ssl; # managed by Certbot

And change it to this:

listen 443 ssl http2; # managed by Certbot

Finally, refresh nginx, and you’re all done:

sudo service nginx reload


Congratulations! You’ve just deployed a Dart application to the Web! In addition, you’ve set up a firewall, added crash protection to your backend, set up nginx to both serve files and proxy to your application, and also enabled both HTTPS and HTTP/2. Give yourself a hand.