
Bitbucket Pipelines with Digital Ocean

Written on 22 March 2020, 05:58


This article explains how to use Bitbucket Pipelines to deploy new versions of a fat jar into a Digital Ocean droplet. This is the way I currently deploy new versions of my Alexa skill Score Board. To follow this article you need a basic understanding of what Bitbucket Pipelines is and some skills managing Linux systems.



Let's start with a bit of context. My skill Score Board is a pretty basic Alexa skill that can keep the score of different sports (football, volleyball, tennis, padel, etc.) just by asking Alexa things like:

- Alexa, open Score Board and start a new match of tennis
- Alexa, open Score Board and add a point to team A
- Alexa, open Score Board and undo the last action

This Alexa skill is hosted as an AWS Lambda function written in Node.js, but of course, all the calls go to an external API that handles those requests and keeps the results in a data store. This API is currently hosted in a Digital Ocean droplet (a very small and cheap one) running Ubuntu. The API has been developed using Vert.x and it basically exposes some REST endpoints. Pretty simple stuff. The application is built as a fat jar using the ShadowJar Gradle plugin, and the generated fat jar is then launched from the command line as a normal jar with:

java -jar build/libs/alexascores-1.0.0-SNAPSHOT-fat.jar

The code of the API is hosted in a Bitbucket repository, and the idea was basically to execute an automatic deployment every time I push a new commit to master, so Bitbucket Pipelines is the best candidate to help with that in a scenario like mine. Please take a look at Get started with Bitbucket Pipelines for a better understanding if needed.

In the repository, I had to create a new file called bitbucket-pipelines.yml with the following content (the Docker image and the repository variable names are illustrative; use your own):

image: openjdk:8   # any JDK 8 image works here

pipelines:
  branches:
    master:
      - step:
          name: Build and deploy
          script:
            - ./gradlew test shadowJar
            # DROPLET_USER and DROPLET_HOST are repository variables
            - scp build/libs/alexascores-1.0.0-SNAPSHOT-fat.jar $DROPLET_USER@$DROPLET_HOST:/user/alexascores/
            - ssh $DROPLET_USER@$DROPLET_HOST 'bash -s' < deploy.sh
            - echo "Deploy step finished"

It basically builds the fat jar after the tests run successfully and copies it using scp into my Digital Ocean droplet. Before this works, you will need to create an SSH key at Pipelines > SSH Keys in the Settings area of your repository and copy the public key to ~/.ssh/authorized_keys on your Digital Ocean droplet.
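On the droplet side, authorizing the key can be done along these lines (the key string below is just a placeholder for the public key Bitbucket generates for you):

```shell
# Append the Pipelines public key to authorized_keys on the droplet.
# The key value here is a placeholder -- paste the real one from Bitbucket.
PIPELINES_PUB_KEY='ssh-rsa AAAA...placeholder... pipelines@bitbucket'
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "$PIPELINES_PUB_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

The permission bits matter: sshd silently ignores authorized_keys files that are group- or world-writable.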

After the fat jar is copied, we call a shell script named deploy.sh. This file has not been created yet; it will need to live in the root folder of your project as well, with the following content:

#!/bin/bash
echo "Deploy script started"

cd ~
# restart the upstart job that runs the API
sudo stop alexascores
sudo start alexascores

echo "Deploy script finished execution"

This file changes directory to the user's home and then stops and starts the alexascores service I have in my droplet. Simple, isn't it?
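Before committing, it is worth sanity-checking the script locally; a quick check (which creates a stub deploy.sh for illustration if the file is not there yet) might look like:

```shell
# Ensure deploy.sh exists (stub for illustration), is executable, and parses.
[ -f deploy.sh ] || printf '#!/bin/bash\necho "Deploy script started"\n' > deploy.sh
chmod +x deploy.sh
bash -n deploy.sh && echo "deploy.sh syntax OK"
```

If you want the executable bit tracked in the repository too, git update-index --chmod=+x deploy.sh sets it on the index.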

My droplet runs a somewhat old Ubuntu 14.04, so the service is defined with upstart by creating the file /etc/init/alexascores.conf with the following content:

author "Fran Garcia"
description "start and stop alexa scores for Ubuntu (upstart)"
version "1.0"

start on started networking
stop on runlevel [!2345]

env APPUSER="user"
env APPDIR="/user/alexascores"
env APPBIN="/usr/bin/java"
env APPARGS="-jar /user/alexascores/alexascores-1.0.0-SNAPSHOT-fat.jar"

# launch the jar as the unprivileged user from its own directory
exec start-stop-daemon --start --chuid $APPUSER --chdir $APPDIR --exec $APPBIN -- $APPARGS

This is the upstart job that allows us to stop and start the API with stop alexascores and start alexascores. I used this article to get this part done: How to set up proper start/stop services

And if I am not wrong, that's all you need to run automatic deployments using Bitbucket Pipelines against a Digital Ocean droplet running Ubuntu 14.04.


I look forward to your comments...
