August 31, 2015

Migrating a Bitbucket Mercurial project to Git

I am a long-time Mercurial user. It was my first choice after the switch from Subversion to the DVCS world. The decision I made at the time was based on how simple Mercurial is: from the design of the filesystem and the friendly, simple command-line options, to the fact that with a single installation I was able to have hgweb serving the central repositories to the team.

Time passed and I got quite used to Mercurial. But looking around, it seems that this is not the case for everyone else. Given the popularity of GitHub as a coding social network, the majority of services and tools have Git or GitHub support, if not both. Furthermore, Git has matured a lot and is very performant, and if the entire Linux kernel uses it, why should I use anything else?

After resisting the change for a long time, I finally decided it was time to make it. Now I am happy with the switch, and I am slowly moving my projects, especially the open source ones, from Hg to Git. After using Git for a while, muscle memory took care of the confusing CLI options and switches. I still land on Stack Overflow to find some of them, though.

Despite my change from Mercurial to Git, I'm probably going to stick with Bitbucket for some projects, especially the private ones and the ones at my company. Bitbucket's current pricing model, which charges per team member as opposed to per repository, fits my monthly budget better.

In this post, we will discuss a few ways to move from Mercurial to Git on Bitbucket, migrating the whole project, not only the repository.

Rename the old repo and create a new one

The first step is to back up your Mercurial repository by renaming it. It is wise to also remove any write permissions so that, if the repository is shared, nobody pushes to the wrong one during the migration.

To rename your repository, go to the project Settings page, then change the "Name" field to something different. In our case, I'll add the .hg suffix, so we can keep the same name for the new Git repository:


Once renamed, you can then reuse the old name to create a new Git project:

Convert from Mercurial to Git

Now that the project has been renamed, let's convert the commit history into the new Git project. The best way to accomplish that is with the hg-git extension. This extension does a great job of interacting with Git repositories from Mercurial itself, allowing you to simply push to the new Git path from a local copy.

To install the extension, follow the instructions from their website. If you are using Linux and have easy_install on your machine, it should be as simple as:

$ easy_install --user hg-git

The --user flag tells easy_install to keep things on your home folder, making it easier to remove the extension later if you want to.

After you have the extension installed, enable it in your ~/.hgrc file:

[extensions]
...
hggit=

Now, it is time to push the changes to the new repository. In order to avoid losing any commits, let's push from a clean, bare Mercurial repo. The process is as simple as cloning, creating a bookmark named master for the default branch, then pushing to the git repository:

$ hg clone ssh://hg@bitbucket.org/username/repository.hg && cd repository.hg
$ hg bookmark -r default master
$ hg push git+ssh://git@bitbucket.org/username/repository.git

The .hg suffix on the first command is required only if you renamed the repository that way. On the other hand, the .git suffix on the Git repository name is mandatory so hg-git can make the push. The commands above assume that you have added your SSH key to the Bitbucket account.

Note: we need to create a bookmark named master for the default branch. This allows the hg-git extension to also create the Git master branch. If you skip that step, you will end up with a repository that displays no branches or commits on the Bitbucket site.

Pushing to a git+ssh URL makes the hg-git extension convert the project first, then upload the contents to the specified Git repository. All changesets will be converted. It is important to notice that they will also all be recreated, generating new hashes based on Git's checksums.
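To double-check the conversion, you can clone the new Git repository and inspect the imported history (same repository name used above):

$ git clone git@bitbucket.org:username/repository.git verify-clone
$ cd verify-clone && git log --oneline | head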

Migrate Wiki

Bitbucket wikis are also repositories tied to your project. However, the project wiki, once enabled, already has an initial commit. Because of that, there is no easy way to migrate the wiki history: hg-git will refuse to push to the new wiki repo since it has a different ancestor. The easiest way to migrate the wiki is to clone both the old wiki and the new one, copy the contents from the old folder to the new folder, and then push the changes to the new project wiki.

$ hg clone ssh://hg@bitbucket.org/username/repository.hg/wiki repository.hg-wiki
$ git clone git@bitbucket.org:username/repository.git/wiki repository.git-wiki
$ cp -r repository.hg-wiki/. repository.git-wiki/
$ rm -r repository.git-wiki/.hg
$ cd repository.git-wiki && git add -A && git commit -m "Import wiki pages" && git push

Migrate Issues

If you also use the issue tracker, Bitbucket offers a nice export/import wizard to move them to the new repository. In the source project Settings page, go to Import & Export in the Issues menu. Then you can export the issues to a zip file, that you can later import to the new repository.

Note that changeset references in the issue tracker will all be broken: since the repository was converted, the changeset references inside issue comments now point to dead links. One way to fix this is to parse the exported issue file, which contains the issues as a JSON array of objects, and rewrite the changeset hashes within it. In my case, I didn't bother with the dead links for now, but I'll update this post if I ever find a good way to keep the references.
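For reference, here is one possible approach, sketched but not something I have tested end to end. It assumes hg-git keeps its hash map in .hg/git-mapfile (one "<git-sha> <hg-sha>" pair per line) inside the repo you pushed from, that the export zip holds the issues in a single JSON file (called db-1.0.json below), and that the links use 12-character short hashes:

cd repository.hg
while read gitsha hgsha; do
    # rewrite each short Mercurial hash with its Git counterpart (assumed 12 chars)
    sed -i "s/${hgsha:0:12}/${gitsha:0:12}/g" /path/to/export/db-1.0.json
done < .hg/git-mapfile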

Conclusion

With the three steps above, you can convert the repository from Mercurial to Git, keeping the majority of the data on the new repository. The caveats are that some issue comment links, as well as the changeset ids, will be different on the new repository.

Happy Hacking!

April 17, 2015

Backup to cloud when there is no space left on device


Imagine that you are planning to move some data from one server to another. To do so, you back up your data by creating a simple compressed tar archive with:

$ tar cjf /tmp/backup.tar.bz2 $HOME

And, to your surprise, you end up with a "no space left on device" error from tar. You need a backup and have no more space: what do you do? Simple: use some Unix pipes to do the job!

gsutil

In my case, my source box already had the Google Cloud SDK configured, and a good network connection to upload my backup. Instead of doubling the space used locally and uploading later, what I did was pipe the backup generated by tar directly to gsutil:

$ tar cjf - . | gsutil cp - gs://mybucket/backup.tar.bz2

Piping is, basically, connecting the standard output of one process to the standard input of another. This is, simply put, a way to take the output generated by tar and send it directly to gsutil, which in turn does a streaming upload to the cloud.

The same piping technique can be used with other tools as well. If you are copying from one server to another via SSH, you can also pipe it over with:

$ tar cjf - . | ssh user@backupserver 'cat > backup.tar.bz2'

You can also use a similar technique to restore without needing to download everything first.
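For instance, to restore directly from Cloud Storage, you can stream the download from gsutil straight into tar (same bucket and file name as in the upload example above):

$ gsutil cp gs://mybucket/backup.tar.bz2 - | tar xjf -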

Happy hacking!

--
References
[1] http://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/
[2] https://cloud.google.com/storage/docs/gsutil/commands/cp#streaming-transfers
[3] http://en.wikipedia.org/wiki/Pipeline_%28Unix%29

April 5, 2015

Setting up a Moodle instance with Compute Engine and Cloud SQL

Things are getting cloudy these days. And that's good! We have a lot of options available now, ranging from bare virtual machines, to fully managed platforms, and with all kinds of containerized environments in between.

In this blog post, we are going to explore the Google Cloud Platform to host a Moodle site, using a distributed setup that's easy to build and scale as we need to upgrade. The setup we are going to build looks like this:
  • One Compute Engine Instance, where we will install Moodle software
  • One Persistent Disk attached to the instance in Read/Write mode, that we will use to store Moodle Data
  • One Cloud SQL Instance, to store the Moodle Database
This setup has some advantages over putting everything into a single instance. First, it uses more than one node to host the system, distributing the load between the web server and the database. This improves performance and allows you to scale vertically, either by increasing the Cloud SQL instance tier or by upgrading your VM to a more performant one. Secondly, we split software from data using both a separate database and a separate persistent disk. This approach makes software updates easy to manage because the software is isolated from the data disk. Making backups with this setup is also very easy: we can enable Cloud SQL backups to keep up to 7 backups, and also create a very simple script to back up our Moodle data regularly using differential snapshots of the persistent disk.

Getting Started

The first thing to do is to sign up for Google Cloud Platform using a Google Account. You can use the USD 300 free-trial credit to get started! The free-trial credit is valid for 60 days, and only for new accounts.

Sign up at https://cloud.google.com/free-trial/

Note: in order to use the free trial, or to use Compute Engine virtual machines, you need to set up a valid payment method, and may need to enter your credit card. To use the free trial, you will create a new project; if you already use Cloud Platform, I also recommend creating a new project to host only your Moodle system, unless you plan to share other resources with it.

With your account set up, you can start by creating a new Compute Engine VM. Compute Engine is billed by the minute, after the first 15 minutes, making it easy to set up, stop and then resume work if you need to. Let's begin with a g1-small (1 vCPU, 1.7 GB memory) instance, which can handle small to moderate traffic very well and is cost effective.

Launch and configure the virtual machine

To launch the instance from the Developers Console, click on your project, then navigate to Compute -> Compute Engine -> VM Instances. In the Instances panel, click on the New Instance button.

In the new instance screen, let's create our instance with an extra disk to store Moodle data. To do so, click on the Show advanced options link, and use the following values:
  • Instance name: moodle-server
  • Firewall: Allow HTTP traffic
  • Zone: choose one that applies to your location
  • Machine Type: g1-small (you can choose a higher one if you expect more traffic)
  • Boot Disk: new from image, and Debian wheezy backports as boot disk image option
  • Under additional disks, create a new one by clicking on the plus sign, and choose create a new disk.
    • Disk name: moodle-data
    • Disk type: a regular persistent disk should be just fine, but if you expect a high load, I suggest SSD as they are more performant
    • Source type: none
    • Size: 200 GB. You can choose lower/higher values here, but keep in mind that this disk will store Moodle data, so it requires a reasonable amount of space, and that larger persistent disks deliver more performance
    • Click on create (the disk is created immediately)
  • In networking, I recommend reserving a static IP: just click on New static IP and give it a name (moodle-ip). The IP is reserved as soon as you click this option
  • For the authentication options, I suggest choosing Read/Write for Compute and Storage, and Enabled for Cloud SQL. This allows you to run scripts that back up data from within your instance
  • Finally, click on the Create button
Your instance will be started, and we will be able to work with it as soon as it boots up. This process is usually very fast, and in one or two minutes the instance will show up in the VM Instances screen. To manage the instance, we need to log in using SSH. You can use the SSH button from the Instances page:
The web-based shell will open in a new window, and a new user account will be added to your instance. Note that the web-based shell can get disconnected, and to avoid losing your work or corrupting your instance, let's use a terminal multiplexer that keeps the shell running if the SSH session is dropped. I suggest tmux, as it is very easy and straightforward to install and run:

sudo bash
apt-get update && apt-get upgrade --yes
apt-get install tmux --yes
tmux

Once running, tmux will show a green bar at the bottom of the screen:


To detach from tmux and keep it running, press CTRL+B and then D; type exit to close the shell session entirely. If your connection is dropped, reopen the SSH window and reattach to tmux using:

sudo tmux attach-session -t 0

Format and mount the disk

Remember that we attached an empty disk to the VM? Since it is empty, we need to format it with a filesystem and mount it on a directory. To do so, we use a program shipped by Google called safe_format_and_mount, which does just that: it formats the disk only if it is not yet formatted, and mounts it for you.

First, let's create a mount point:

mkdir -p /mnt/moodledata

Then let's format and mount the disk (the backslashes are line continuations, so this is still a single command):

/usr/share/google/safe_format_and_mount \
  -m "mkfs.ext4 -F" \
  /dev/disk/by-id/google-moodle-data /mnt/moodledata

Check that the device is mounted by listing the contents:

ls -l /mnt/moodledata

You should see a lost+found entry, as expected for an empty ext4 filesystem. To make this mount point be remounted during boot, add an entry to the /etc/fstab file (press CTRL+D when done):

cat >> /etc/fstab
/dev/disk/by-id/google-moodle-data /mnt/moodledata ext4 defaults 1 1

Important! Use double greater-than signs (>>), otherwise you will lose the existing entries, including the root mount point, from the fstab file!

Launch and configure the Database

Now, let's launch the database using Cloud SQL. To do that, navigate to Storage -> Cloud SQL menu, then click on the New instance button:



Fill the form with the appropriate values:
  • Instance name: moodle-db
  • Region: choose the same geographic region as your Compute Engine VM
  • Choose the D0 tier. You can safely start with D0, and upgrade later if you see high usage on the database
  • Click on the Show advanced options link
  • Choose the MySQL 5.5 version
  • Choose a billing plan. In this case, as Moodle will constantly talk to your database, and since you will need a static IP address, the per-use plan may not make much difference. I recommend choosing the daily package plan
  • Select the same region as your Compute Engine VM for the location option
  • Leave the Enable backups option checked and choose a suitable time frame for backups; this can be changed later
  • Check the option to assign an IPv4 address to the instance. Although a free IPv6 address is available, we can't use it from within Compute Engine yet
  • Under allowed networks, click on the + sign, and enter the IP address of your Compute Engine VM; you can use the same name as your VM: moodle-server
  • Click on the Create button to launch the instance
The server instance starts very fast. We need to create a new MySQL user and a new database for Moodle. To do so, click on the instance name in the list, and then on the Databases tab, then on the New database button:

In the new database form:
  • Database name: moodle
  • Character set: utf-8
  • Collation: default collation
  • Click on the Add button.
To create the user, click the Access Control tab, then on the Users tab, and finally click on the New User button.

In the new user form:
  • Username: moodle
  • Password: create a strong password, and take note of it to use on the Moodle setup
  • Click on the Add button
Now we are ready to install and configure our Moodle server! Let's get back to our compute engine VM terminal window.

Installing and configuring Moodle software

Moodle requires a web server and a few PHP modules. You can choose the webserver you know best, and for this tutorial I'll use the Apache webserver. Since this is a Debian server, let's use apt to download and install everything we can, so we benefit from the Debian Security upgrades:

apt-get install apache2 php5 php5-gd php5-mysql php5-curl php5-xmlrpc php5-intl

In order to get the Moodle software, let's choose the latest stable version from the download site: https://download.moodle.org/releases/latest/. Click on the Download tgz button, and dismiss the automatic download window: we need the direct download link. Right click on the click here for manual download link and copy the link address:

In the terminal window, we download the file into our instance using curl. Type curl, paste the copied link with CTRL+V, then finish the command with > moodle.tar.gz, resulting in a command like this one:

curl https://download.moodle.org/download.php/direct/stable28/moodle-latest-28.tgz > moodle.tar.gz

Important: do not forget the "> moodle.tar.gz" part of the command at the end, otherwise your file will be printed on the screen! Before unpacking the moodle software, remove the default index.html page from Apache:

rm /var/www/index.html

Then, use the tar command to unpack the software in the Apache directory:

tar xf moodle.tar.gz -C /var/www --strip-components=1

The tar command above extracts the file contents into /var/www, skipping the creation of the moodle sub-directory. Now we need to configure Moodle to use all the resources we launched.

Moodle will need read/write permissions on the software and data directories. To fix permissions on these places, use the chown command:

chown www-data:www-data -R /var/www
chown www-data:www-data /mnt/moodledata

To configure Moodle, launch the setup wizard by navigating to your instance IP, or by clicking on the compute engine VM IP link on the VM instances page:
You should see the Moodle setup page:

Select your language, then click on the Next button. Moodle requires you to set some important options. The first one is the visible website address; let's keep the instance IP for now, and we can change this value later when the setup is finished. You also need to tell Moodle the software directory and the data directory. The software directory is where we extracted Moodle, /var/www, and the data directory will be the mount point we created with the 200 GB persistent disk: /mnt/moodledata. Enter these values and click next.


In the next page, we need to setup the database connection. First, select the mysqli database driver, then click next. In the database connection settings fill the values:
  • Database host: the IP address of your Cloud SQL instance
  • Database name: moodle
  • Database username: moodle
  • Database password: the password you chose previously
  • Tables prefix: optional, just use the default
  • Leave the database port and Unix socket empty, as we are going to use the defaults
When you click next, Moodle will connect to your Cloud SQL instance, which will start up in response to the first connection. Moodle will then present the software license; read it, and if you agree, click Continue to install.

On the next screen, any required PHP modules that are missing will be listed in red. You can install them and hit reload to let the installer check again. If everything is OK, just click the Continue button to proceed with the installation.

The next step will take a while: the Moodle installer will create all database tables, populate some initial data, prepare the moodledata directory with some cache values, and get everything in place. Wait until the page finishes loading and you see a Continue button.

After the setup is complete, you can create the admin user. The admin user is the one with access to all setup pages, and you should choose a strong password for it. After you create the admin user, you will be prompted to enter some basic site information. Fill in the form and click Save changes.

Now you should see your Moodle home page, and you can start playing around, creating users and courses, installing plugins and so forth.

Enable the cron job

Moodle itself requires you to configure a job that runs periodically, called a cron job. The easiest way to set up the Moodle cron job is by running it with curl. Let's enable the Moodle cron with a simple shell script and the Debian run-parts directories for cron jobs. First, create a simple shell script that fetches the cron URL and logs the output for debugging:

cat > /etc/cron.hourly/moodle-cron
#!/bin/sh
curl -s http://localhost/admin/cron.php > /var/log/moodle-cron.log 2>&1

To finish the file, type CTRL+D and check the contents with the cat command:

cat /etc/cron.hourly/moodle-cron

Now, let's make it executable, and run a test cron:

chmod +x /etc/cron.hourly/moodle-cron
/etc/cron.hourly/moodle-cron

The cron job will run and you can check its output in the log file.

Sending e-mail

Compute Engine VMs are not allowed to send e-mail directly. However, you can configure Moodle to use an SMTP server hosted on another service. Google partnered with Sendgrid to deliver your Compute Engine emails, so you can follow this page to sign up for a free account that can send 25,000 emails from your Google projects. Once signed up, grab the Sendgrid credentials and configure the hostname, port 2525, username and password using the Moodle e-mail setup instructions. Alternatively, you can set up Mandrill, which also has a free monthly email quota and provides an SMTP server connection just like Sendgrid.

Improvements

Some improvements that can be done:
  • Add another daily cron script that backs up the moodle-data disk, as well as the moodle-server disk (the boot disk). See how to take disk snapshots for backups, and take care to make your script remove old backups at the end. A minimal sketch is shown after this list.
  • Change the site URL in the config.php. Moodle saves some data in the database with the full URL, so you should use the database search and replace tool to change data to the new URL.
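For the backup item above, here is a minimal sketch of such a daily script. The zone, disk names and schedule are illustrative, and the old-snapshot cleanup is left out:

#!/bin/sh
# /etc/cron.daily/moodle-backup (illustrative): snapshot both disks every day.
ZONE=us-central1-a                 # adjust to the zone of your instance
STAMP=$(date +%Y%m%d)
gcloud compute disks snapshot moodle-data \
    --zone "$ZONE" --snapshot-names "moodle-data-$STAMP"
gcloud compute disks snapshot moodle-server \
    --zone "$ZONE" --snapshot-names "moodle-server-$STAMP"
# TODO: list and delete snapshots older than your retention window.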

Conclusion

In this post, we learned how to set up a simple, efficient and cost-effective Moodle site that can handle mid-size traffic and provides a good infrastructure that can be vertically scaled in the future.

Important! You will be billed for the resources used, so if you did this only as a test, remember to remove the Compute Engine VM, the persistent disks, and the Cloud SQL instance.

March 31, 2015

Building custom Linux kernel the Debian way

Image by: shokunin
Building a Linux kernel is a nice way to learn some new skills, install a fresh new version of your operating system's core software, and make any optimizations you see fit for your machine. However, building a kernel is not a trivial task: from downloading to configuring, building and installing, you may get lost if you don't follow some tips.

In this blog post, we are going to understand some basic steps on how to build a custom kernel from sources, using some tools available in the Debian GNU/Linux operating system. If you use any other Debian derivative, like Ubuntu or Mint, this tutorial should work as well.

Why build a custom Kernel?

First and most important: because you can! Secondly: because it is fun! And, of course, because you may need a newer kernel to support your hardware, or may need to rebuild the kernel to enable specific features.

It is also a good experience that allows you to hack into the biggest open source project, see all available configuration options and disable those that you may never use, making a slim and optimized kernel for your machine.

Understanding the process

In Debian and derivatives, all packages are installed by the dpkg tool, or via the apt command line and its friends. This ensures that every file installed by a package is tracked by the system, allowing you to properly upgrade or remove it later. If you install something on Debian without using dpkg/apt, you risk compromising your system and future Debian upgrades.

Building a Debian package is a complex task by itself: you need to follow a set of rules in order to do it right. But for some commonly used software, Debian also provides a few tools that help you automate the process of generating a .deb installable file out of the source tarball. This is true for the JDK, as well as for the Kernel.

To build a kernel from sources and generate a Debian package using the tools available in the repository, you need to install the kernel-package package, which provides the make-kpkg command:

# apt-get install kernel-package build-essential

The process to build the kernel is:
  1. Download the sources from http://www.kernel.org/
  2. Extract the source tarball: 
    • tar xf linux-3.19.3.tar.xz
  3. Configure the Kernel options enabling/disabling features:
    • cd linux-3.19.3 && make menuconfig
  4. Compile all sources to generate the kernel binary and modules:
    • make-kpkg --revision=~custom1 -j 4 binary_image
This process is simple, except for the configuration options: if you don't know how to configure the kernel, you will get lost in the make menuconfig script. Since configuring the kernel is a hard thing to properly understand and master, let's do that in a more automated way. Instead of make menuconfig, use make olddefconfig. This takes the current kernel configuration as a base and updates it to match the new kernel, using the default values for new options.
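For reference, the non-interactive configuration step could look like this, assuming the 3.19.3 source tree used above and that you seed .config from the running kernel's configuration shipped in /boot:

$ cd linux-3.19.3
$ cp /boot/config-$(uname -r) .config   # start from the running kernel's config
$ make olddefconfig                     # accept defaults for any new options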

Helper script

This process can be even further automated using a script I wrote called kernelbuild. The script is available under the set of scripts I host at the tools repository on my Bitbucket account. To build the newest stable kernel version, all you need to do is:

$ kernelbuild --fast --safe-config

That's it. The script downloads the latest stable kernel from www.kernel.org, unpacks the source in the current directory, runs a simple step to automatically configure the kernel based on your currently running one (using automated defaults for new build options), and then compiles the kernel using the make-kpkg tool. This process generates all the .deb files for you: the kernel image, a linux-headers package and a package with debug symbols. Super easy!

How the script works

The script uses wget to download the kernel package for you. It parses the kernel.org HTML output to find the latest stable version and downloads that version's tarball. You can skip the version autodetection and download a specific version with the -k switch.

Once downloaded, the script unpacks the source tarball. After that step, the script runs the make olddefconfig kernel configuration step. Although simple, this can behave oddly if the kernel you are running is a very different release from the one you are building. You may be prompted to set some options by hand, and if unsure, just press ENTER to use the default value.

The next step is to build the kernel, and the script passes a few parameters to make-kpkg, such as a custom version tag (~custom1). This version number uses a tilde, so your kernel package may be upgraded by the official Debian kernel release once it is available in the repositories. The script also uses some other build options, like the -j flag, which enables parallel builds and uses all available CPU cores.

Installing your new Kernel

To install your Debian kernel packages, all you need is the dpkg tool. Use ls to see the .deb files built, and install the image and headers packages. If you built kernel version 3.19.3, install it with:

# dpkg -i linux-image-3.19.3_3.19.3~custom1_amd64.deb
# dpkg -i linux-headers-3.19.3_3.19.3~custom1_amd64.deb

This installs the packages built by the script, and you can then reboot your machine to use the new kernel. The make-kpkg tool builds a package that contains the triggers that update/generate the initrd file (necessary to boot) and the appropriate GRUB menu entries.

Conclusion

Building the Linux Kernel is a very interesting task, that can help you better understand how an operating system works and learn more about the options available to optimize and customize your machine software.

In this post, we saw some tips to build a kernel suitable for use on a Debian-based distribution, and how you can make use of a helper script to generate a Debian package that integrates nicely with the operating system software.

Disclaimer: I have tested the scripts and commands provided in this post on multiple machines without issues. The source code is provided as is, without any warranties, and I am not responsible for any damage that you cause to your software or hardware when using it. Use at your own risk.

Happy hacking!

March 25, 2015

Multiple runtimes into single Google App Engine project with Modules

There is no single tool to rule them all. Each Google App Engine runtime has its strengths, but sometimes you can't afford to reinvent the wheel and reimplement something that is already available in a library, just because of the language you chose for the project in the first place. With the App Engine Modules feature, you can mix and match different programming languages that share the same data in a single Google Cloud Platform project.

In this post, we discuss what the Modules feature is and how to leverage its power with a sample application that mixes Go and Python.

For the impatient, here is the source code https://github.com/ronoaldo/appengine-multiple-runtimes-sample.

What are Modules?

Google Cloud Platform has a top-level organization unit called Project. Each project comes pre-configured with an App Engine application. By creating a project, you have access to a Cloud Datastore database, Memcache, and Task queue, all ready to use.

In the early days of App Engine, your app could have multiple versions, but there was no option to configure backend machines. Once the now-deprecated Backends feature was released, you could configure backend machines, but there was no way to deploy multiple versions of them.

With the Modules feature, you can split your application and have a different set of performance, scaling and version configurations under the same project, one for each module. This allows you to mix different runtimes but still share the same state in the same project: all modules have access to the same resources, such as Datastore and Memcache. This allows you to, for instance, create a backend in Java that generates reports with App Engine Map/Reduce, while you leverage Go in the frontend, also called the default module, with auto-scaling.

That's just what I want! How to use this stuff?

You can have any setup you want depending on your codebase. The Modules API just requires you to specify the default module, plus any number of non-default modules, when running and deploying.

You specify the module name using the module: directive in app.yaml and the <module> XML node in the appengine-web.xml configuration files. A file that does not contain any module directive defines the default module; you can also explicitly set module: default, which I recommend to avoid mistakes. To create other modules, all you need to do is configure their names in separate configuration files. For Java, you can also use the standard EAR (Java Enterprise Application) project layout and set the configuration options under each application folder (each module).

My suggested project layout has one separate folder for each module. For instance, our sample application uses this layout:

project/
  go-frontend/
    app.yaml
    main.go
  py-backend/
    app.yaml
    main.py

In the go-frontend folder, we put our App Engine app in Go. Then, in the py-backend folder, we put our backend logic in Python. To make all this work, you have to:

  1. Make sure that go-frontend/app.yaml has the module: default directive.
  2. Make sure that py-backend/app.yaml has the module: py-backend directive (or any other name you want, except the word default); see the sketched app.yaml files after this list.
  3. Make sure that both app.yaml files have the same application: element value.
  4. Use the gcloud command to install the App Engine SDKs.
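For reference, here is roughly what the two configuration files could look like. The application ID, version and handler entries below are illustrative placeholders, not taken from the sample repo:

go-frontend/app.yaml:

application: your-app-id
module: default
version: 1
runtime: go
api_version: go1

handlers:
- url: /.*
  script: _go_app

py-backend/app.yaml:

application: your-app-id
module: py-backend
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app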
With that setup, you can then run the app locally using the command:

$ gcloud preview app run go-frontend py-backend

The gcloud preview app subcommand will then parse all the files you have and launch them locally. The default module runs on port 8080 and py-backend on port 8081, so you can address them by port number.
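Once both modules are up, a quick way to check them locally (using the ports mentioned above) is:

$ curl http://localhost:8080/   # default module (go-frontend)
$ curl http://localhost:8081/   # py-backend module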

To deploy, you can use:

$ gcloud preview app deploy go-frontend py-backend

With this setup, you have some options:
  1. Each module can live in a separate source code repository, making it easy to manage their lifecycles independently of each other.
  2. You can deploy a single module by passing only that folder on the command line.
Some tips:
  1. Remember that all modules share the same state, but do not share code. That said, be careful when changing the datastore. For instance, if a Go struct saved more properties than your Python db.Model has, you may lose data! In this case, it would be nice to use something like a raw db.Expando (Python) or a datastore.PropertyList (Go) to manipulate the data from the non-default module, or to always keep your models in sync between the codebases.
  2. When deploying, you may need to convert your app performance settings to modules. That means you can no longer change the instance class in the Application Settings; instead, you need to specify this in the app.yaml / appengine-web.xml files. (See the modules documentation on how to migrate.)
  3. This scenario is particularly useful to run App Engine Pipelines or App Engine Map/Reduce jobs using the Python runtime backend while keeping the front-end in Go.
  4. Some settings are global to your application, like the task queue, cron, and dispatch entry definitions. So keep these files (queue.yaml, cron.yaml, and dispatch.yaml) only in the default module. If you have three queues defined in the go-frontend folder and only one in py-backend, and you deploy only py-backend, only one queue will remain on the server, and this can break things, particularly the cron job definitions.

If I have custom domains, how can I make this work?

You can use address wildcards to access the backends. Let's say you have a wildcard mapping of your custom domain like this:

*.mydomain.com CNAME ghs.googlehosted.com

App Engine will then route requests as follows:

www.mydomain.com is handled by the default version of go-frontend
- since no module is named www, and no version subdomain was specified, this falls into the default module, default version

default.mydomain.com is handled by the default version of go-frontend
- matches the default module/default version

py-backend.mydomain.com is handled by the default version of py-backend
- matches the py-backend module/default version

beta.py-backend.mydomain.com is handled by the beta version, if it exists, of py-backend
- matches the py-backend module/beta version

Any string that does not match a module is handled by the default module, and any string that does not match a version is handled by the default version of that module. This is the same for the *.your-app-id.appspot.com domain that comes free with your app.

You can also configure rules that change this behavior using the dispatch.yaml (dispatch.xml) file. This file allows you to specify wildcard URLs and say which modules they will be served from. For instance, you can map */report/* so that requests to any hostname with a path starting with /report/ are served by your Python backend.

Note that, if you dispatch /report/* to the Python backend, the Python backend handlers will receive requests with that path prefix (/report/), not at the root path. Therefore, you need to map your WSGI app using that path as a prefix. See more about routing requests to modules in their docs. Check out the sample code dispatch.yaml as well.
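As a reference, a minimal dispatch.yaml sending the /report/ prefix to the Python backend could look like this (module name as used in our sample layout):

dispatch:
- url: "*/report/*"
  module: py-backend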

Conclusion

With Modules, your App Engine applications can be composed of a more robust infrastructure, allowing you to mix different programming languages and runtimes. If you work with microservices on App Engine, this is a great way to take better advantage of your app's resources, using the right language and tools for the job.

--
Update 25/03/2015 - Fixed grammar errors, and added clarifications to first paragraphs.

December 30, 2013

Tutorial: How to get started with Apache Cordova and Firefox OS on Nitrous.IO

It's time to go mobile. And it is time to go cloud. Why not go mobile in the cloud? In this tutorial we'll learn how to get started building mobile web apps using Apache Cordova (formerly PhoneGap) and how to test your app using the Firefox OS Simulator. To avoid some setup complexity and to allow coding entirely in the cloud, let's build the app using a Node.js Nitrous.IO box.

1 - Signup for Nitrous.IO and spawn a Node.js box

Nitrous.IO is a cloud development environment that allows you to quickly and easily set up a Linux box in the cloud with zero install: just your browser. If you don't have a Nitrous account yet, you can sign up for one at https://www.nitrous.io/ using Google, GitHub, or a username and password.

Once logged into Nitrous, start a new box, and choose the Node.js box template:


The IDE will open after your box is ready.
The black screen at the bottom is the Console window, where you will type the commands.

2 - Install Apache Cordova CLI using npm

Apache Cordova is a mobile application container that allows you to build web apps that can be installed on mobile devices and benefit from device features, such as the camera or GPS. I recommend reading the documentation to get a grasp of it: at least the Overview and the Command Line Interface (or the Web Project Dev) guides.

To install cordova on your box, run the command below in the console window:

$ npm -g install cordova

npm is the Node Package Manager, and the Cordova CLI is distributed as a Node module. The above command downloads the Cordova CLI tools and installs them on your Nitrous box, under the ~/.nvm folder. This may take a few minutes, and when the installation is complete, you can test it with the command:

$ cordova info

This command will display some details about how to use Cordova, and about your Cordova installation.

3 - Start a new cordova project

With Cordova installed, we are ready to start the project itself. In your console window, run the commands:

$ cd ~/workspace
$ cordova create hello

This will create a project under ~/workspace/hello, and you will be able to see some folders:

$ ls hello/
merges  platforms  plugins  www

The folders merges, platforms and plugins will be empty, but we will fill some of them soon. The merges folder hosts platform specific content overrides. The platforms folder will host the cordova.js JavaScript bindings, plus some configurations and other media. The plugins folder will host additional plugins, as you add them.

The www folder is your application root. Cordova will create a small sample app that responds to the 'deviceready' event in that folder. The index.html file references the app script, style sheet and image assets.


4 - Add the Firefox OS platform

To make your app deployable, Cordova needs some platform-specific JavaScript bindings, plus the target platform SDK to build the final app. You can add as many platforms as you wish with the command cordova platform add <platform shortcut>. Some platforms, like ios (from Apple) or wp8 (from Microsoft), require a particular operating system to build and run your app on emulators (see details here). In this tutorial, we'll be using the firefoxos (from Mozilla) platform, since it can be tested entirely using the web.

To add the Firefox OS platform, run these commands from the hello folder:

$ cd ~/workspace/hello
$ cordova platform add firefoxos

After that command finishes, the platforms folder will have a new sub-folder, with the cordova bindings and some sample configurations. We are ready to test our app now!

5 - Test in the Nitrous.IO browser

Cordova can serve your app from a built-in web server, so you can preview the app in your browser.

Once we have added a platform, we can run:

$ cordova serve 3000
 Generating config.xml from defaults for platform "firefoxos"
 Preparing firefoxos project
 Static file server running on port 3000 (i.e. http://localhost:3000)
 CTRL + C to shut down

Nitrous allows you to live-test servers started from your box. Click on the Preview menu -> Port 3000, and a new browser window will open, displaying the Cordova serve main page:


This page lists some app info, and the platforms and plugins available. Note that the firefoxos platform is a link you can click and, voila! Your app is running!



The same URL can be used to access the app on your mobile phone or tablet, using the device browser, so you can preview how it will look on a smaller screen.

6 - Install the Firefox OS Simulator

Besides using tools that preview the web page in mobile-sized iframes, you can also use a real device simulator for this task: the Firefox OS Simulator. It is a Firefox add-on that allows you to test web apps designed for Firefox OS. To install it, launch Firefox (or Iceweasel if you're on Debian), go to the Firefox OS Simulator add-on page, then click the "Add to Firefox" button. The download and setup may take some time, as the add-on is more than 60 MB.

Once installed, the simulator can be launched from the Firefox (Iceweasel) menu -> Developer Tools -> Firefox OS Simulator.

7 - Run your app in the simulator

The simulator will allow you to add a URL for your app. Copy the link of your app preview and paste it to the Firefox OS Simulator URL bar, then click on the "Add URL" button:

The device simulator will launch, download and install your app for testing. And you will see the app running on the simulator:


The simulator is pretty nice, and gives you an idea of how your app fits on a real device. You can drive the device with your mouse, navigate around the OS, and test launching and closing the app.

Conclusion

The web of today is the web accessible from mobile phones to 4K TVs. The mobile market is growing fast, and the web is here to help you jump in. Using technologies such as Cordova (PhoneGap), you can reach a broader audience of mobile customers by using well-known web standards. Cloud computing enables infinite scaling and computational power, helping you build awesome apps that can do amazing things. Nitrous helps you get started on almost everything, by providing a simple and easy-to-use web interface, with a powerful GNU/Linux box in the cloud.

/happy hacking

November 16, 2013

Coding in the Cloud - Part II - Google Apps Script and Google Sites Review

Continuing the Coding in the Cloud series, today we'll take a look at the possibilities for quickly and easily building websites and smart web applications using some cool tools from Google: Google Sites and Google Apps Script.

Google Sites is a tool to help you quickly and easily build a website. You can think of it as a CMS tool with some goodies: built-in themes, mobile optimization, site search, customization of theme and layout, and a few other nice integrations with other Google services. You can, for instance, embed a Google Drive form into your site with a few mouse clicks.

Apps Script is a JavaScript runtime that runs in the cloud and allows you to extend and automate the Google Apps suite, improving Google Docs, Forms, Spreadsheets and Sites. It has a lot of built-in support for Google APIs, allowing you to integrate your script with both Google Apps and Google Cloud services.

Development Environment


With your standard Google Account or with a Google Apps account, you have access to Google Drive and, from there, to the Apps Script editor. From Google Drive, you can create standalone scripts that you can publish either as libraries or web applications. You can also create scripts from containers, like a Spreadsheet, a Document, a Form or a Site.

The script editor has some nice features. It runs in your browser, has syntax highlighting for HTML, JavaScript and CSS, and also features an integrated debugger. From the editor, you can manage the files within your project, which can be script files (.gs extension) and HTML files (.html extension).

Web Apps, Widgets and Web Services

With Apps Script you can build apps, widgets for Sites or Docs, and even simple web services. Your script can handle the HTTP GET and POST methods with callbacks, and serve content like HTML, XML, JSON and more using the HtmlService and ContentService modules. You can also build useful widgets and embed them into Sites or other Google Drive apps.

Import and Export features

A recently launched feature that is very usefull is the ability to import and export standalone scripts. You can, for instance, create a github project page for a Apps Script library, and manage your source versions from there. Once you have it done, you can then publish it to Google Drive and create a new deployed version from there.

Taking advantage of this feature, the Google Plugin for Eclipse received an update a few weeks ago allowing you to edit your Apps Script code within Eclipse and sync it back to the cloud.

Easy integration with other Google APIs

With the seamless integration with other APIs, you can build awesome apps. You can use services like BigQuery, Google Prediction, Google Cloud SQL, and many other Cloud APIs, apart from nice integration with Google Drive, Gmail and other Google Apps services. All services expose JavaScript libraries to help you get started and take advantage of a lot of functionality from those tools.

A while ago, I flooded my inbox with unwanted, repeated e-mails from a bad mail alert on one of my systems. It was about 5 million e-mails, too many to remove by hand in Gmail. Thanks to Apps Script, I was able to write a simple batch-removal script capable of cleaning up my inbox.

Triggers

With container-bound and standalone triggers, you can schedule your script to run daily, or in response to an event, like a form submission. A good use case is sending daily e-mail alerts from a simple help-desk-like spreadsheet, reminding users of their open tickets.

Conclusion

Initially designed as a way for users to customize and extend the functionality of Google Apps services, Apps Script is growing into a very nice and easy-to-use tool. Using the Google Sites CMS features with Apps Script programming in the backend, you can quickly build web apps that range from simple corporate tools to full-featured e-commerce sites. Seamless integration with the cloud and other Google services allows your app to be built and deployed entirely in the Google Cloud!

Stay tuned for the next post in the series!