May 16, 2022

Export your assets with this simple gcloud command

Managing Google Cloud environments is often a complex task. You need to take care of resource provisioning and keep all services up and running, but you may also need to understand service usage (and, by proxy, its costs), who is using your resources, and which projects have certain APIs active. This escalates quickly as your company adopts more and more cloud services, and you end up with a very real risk of some shadow IT in the cloud too.

One of the tasks you may need to do is to visualize all your cloud assets and answer specific questions about them, like "which projects have a specific API enabled?". An asset in this context is any resource you have created using any of the APIs, in any of your organization's projects.

Cloud Asset Inventory is available from the IAM > Asset Inventory page. From there, you can see all your cloud resources at a glance using a pretty geographic visualization:

This visualization is great for quickly browsing or searching for specific items using the UI filters. For instance, you can check all your Cloud Storage buckets by filtering them on the left pane:

Doing this in the UI is handy, but you may need to run some more complex filtering, and for that we need a different approach. One of the things I find great about Google Cloud is that almost every task you can do in the Cloud Console can also be done via the REST APIs and via the gcloud command-line interface.

Let's say you need to export all the assets you have in Google Cloud to a spreadsheet, for a report to management. First, let's grab our organization's numeric identifier. This can be done using Cloud Shell (a free VM Google gives you, integrated into the Cloud Console!):

gcloud organizations list

This will print some information about your organization, and we need to grab the numeric ID for later use. Let's store it in an environment variable to use in the next commands:

export ORG_ID="$(gcloud organizations list --format='value(name)')"

Now, let's export all resources underneath our organization with the Asset Inventory sub-command:

gcloud asset list \
    --organization="$ORG_ID" \
    --content-type=resource

This will print all your assets, in JSON format, to the standard output of the terminal. You can already use this data for some shell-script foo, like grep commands, to find specific items. But let's go a bit further with the gcloud command options.
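For example, a quick pipeline to show only the lines mentioning buckets (the search term "bucket" is just an illustration; use whatever you are hunting for):

```shell
# Pipe the asset listing through grep to find specific items;
# -i makes the match case-insensitive.
gcloud asset list \
    --organization="$ORG_ID" \
    --content-type=resource \
  | grep -i 'bucket'
```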

First, you may need to export only specific types of assets. This can be done using the --asset-types parameter, which takes a comma-separated list of asset types, such as compute.googleapis.com/Instance for VM instances or storage.googleapis.com/Bucket for storage buckets. Let's combine both to see all VMs and buckets in our organization:

gcloud asset list \
    --organization="$ORG_ID" \
    --content-type=resource \
    --asset-types='compute.googleapis.com/Instance,storage.googleapis.com/Bucket'

This filtered the items, but it still shows all the JSON attributes. Let's trim it down by projecting only a few attributes. We will export:

  • The asset type
  • The resource API endpoint
  • The resource location
  • The last update time

gcloud asset list \
    --organization="$ORG_ID" \
    --content-type=resource \
    --asset-types='compute.googleapis.com/Instance,storage.googleapis.com/Bucket' \
    --format='csv(assetType, name, resource.location, updateTime)'

Here we used --format=csv() to project the desired columns and print them in a convenient format to import somewhere else, like a Google Sheet, for reporting purposes. Let's wrap up by redirecting the command output to a file:

gcloud asset list \
    --organization="$ORG_ID" \
    --content-type=resource \
    --asset-types='compute.googleapis.com/Instance,storage.googleapis.com/Bucket' \
    --format='csv(assetType, name, resource.location, updateTime)' \
    > assets.csv

Now, if you executed the command in Cloud Shell, you can use the file explorer to right-click on the file and download it, then import it into a spreadsheet:

What we discussed here for the Asset Inventory also works for other gcloud sub-commands. Check out the gcloud reference documentation to learn more about all the nice things you can do with it!
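As a sketch of that portability, the same projection and formatting flags apply to, say, the Compute Engine listing commands (the columns below are the standard instance fields):

```shell
# The --format flag works across gcloud sub-commands; here it turns
# the instance listing into CSV with just three columns.
gcloud compute instances list \
    --format='csv(name, zone, status)'
```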

Happy Hacking!

August 31, 2015

Migrating a Bitbucket Mercurial project to Git

I am a long-time Mercurial user. It was my first choice after the switch from Subversion to the DVCS world. The decision I took at that time was based on how simple Mercurial is: from the design of the filesystem and the friendly, simple command-line options, to the fact that with a single installation I was able to have hgweb serving the central repositories to the team.

Time passed and I got very used to Mercurial. But looking around, it seems this is not the case for everyone else. Given the popularity of GitHub as a coding social network, the majority of services and tools support Git or GitHub, if not both. Furthermore, Git has matured a lot and is very performant, and if the entire Linux kernel uses it, why should I use anything else?

After resisting the change for a long time, I finally settled that it was time to make it. Now I am happy with the switch, and I am slowly moving my projects, especially the open source ones, from Hg to Git. After using Git for a while, muscle memory took care of the confusing CLI options and switches. I still land on StackOverflow to find some switches, though.

Despite my change from Mercurial to Git, I'm probably going to stick with Bitbucket for some projects, especially the private ones and the ones at my company. Bitbucket's current pricing model, which charges for team members as opposed to repositories, fits better in my monthly budget.

In this post, we will discuss a few ways to move from Mercurial to Git on Bitbucket, migrating the whole project, not only the repository.

Rename the old repo and create a new one

The first step is to back up your Mercurial repository by renaming it. It is wise to also remove any write permissions, to avoid people pushing to the wrong repository during the migration if it is a shared one.

To rename your repository, go to the project Settings page, then change the "Name" field to something different. In our case, I'll add the .hg suffix, so we can keep the same name for the new Git repository. Once renamed, you can reuse the old name to create a new Git project.

Convert from Mercurial to Git

Now that the project has been renamed, let's convert the project's commit history into a new Git project. The best way to accomplish that is using the hg-git extension. This extension does a great job of letting you interact with Git repositories from Mercurial itself, allowing you to simply push to the new Git path from a local copy.

To install the extension, follow the instructions from their website. If you are using Linux and have easy_install on your machine, it should be as simple as:

$ easy_install --user hg-git

The --user flag tells easy_install to keep things on your home folder, making it easier to remove the extension later if you want to.

After you have the extension installed, enable it in your ~/.hgrc file:

[extensions]
hggit =
Now, it is time to push the changes to the new repository. In order to avoid losing any commits, let's push from a clean Mercurial clone. The process is as simple as cloning, creating a bookmark named master for the default branch, then pushing to the Git repository (the youruser path segments below are placeholders for your own account):
$ hg clone ssh://hg@bitbucket.org/youruser/repository.hg && cd repository.hg
$ hg bookmark -r default master
$ hg push git+ssh://git@bitbucket.org/youruser/repository.git

The .hg suffix on the first command is required only if you renamed the repository that way. On the other hand, the .git suffix on the Git repository name is mandatory so hg-git can make the push. The commands above assume that you have added your SSH key to your Bitbucket account.

Note: we need to create a bookmark called master pointing at the default branch. This allows the hg-git extension to create the Git master branch. If you skip that step, you will end up with a repository that displays no branches or commits on the Bitbucket site.

Pushing to a git+ssh URL makes the hg-git extension convert the project first, then upload the contents to the specified Git repository. All changesets will be converted. It is important to note that they will also all be recreated, generating new hashes based on the Git checksum.
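A quick sanity check after the push is to compare commit counts between the two repositories. This is a sketch, run from inside the Mercurial clone; the clone URL is a placeholder for your own repository:

```shell
# Count Mercurial changesets (one 'x' printed per changeset)...
hg_count="$(hg log --template 'x' | wc -c)"
# ...and Git commits in a fresh clone of the new repository
# (youruser/repository.git is a placeholder).
git clone git@bitbucket.org:youruser/repository.git /tmp/repository-check
git_count="$(git -C /tmp/repository-check rev-list --count master)"
echo "hg: $hg_count, git: $git_count"
```

The two numbers should match; a mismatch usually means the master bookmark was missing when you pushed.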

Migrate Wiki

Bitbucket wikis are also repositories tied to your project. However, once enabled, the project wiki already has an initial commit. Because of that, there is no easy way to migrate the wiki history: hg-git will refuse to push to the new wiki repo, since it has a different ancestor. The easier way to migrate the wiki is to clone both the old wiki and the new one, copy the contents from the old folder to the new folder, and then push the changes to the new project wiki.

$ hg clone ssh://hg@bitbucket.org/youruser/repository.hg/wiki repository.hg-wiki
$ git clone git@bitbucket.org:youruser/repository.git/wiki repository.git-wiki
$ cp -r repository.hg-wiki/. repository.git-wiki/
$ rm -rf repository.git-wiki/.hg
$ cd repository.git-wiki && git add -A && git commit -m 'Import wiki contents' && git push

Migrate Issues

If you also use the issue tracker, Bitbucket offers a nice export/import wizard to move the issues to the new repository. In the source project Settings page, go to Import & Export under the Issues menu. Then you can export the issues to a zip file that you can later import into the new repository.

Note that changeset references in the issue tracker will all be broken: since the repository was converted, changeset references inside issue comments now point to dead links. One way to fix this is to parse the issue export format, which is a JSON array of issue objects, and rewrite the changeset hashes within it. In my case, I didn't bother with the dead links for now, but I'll update this post if I ever find a good way to keep the references.
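A rough sketch of such a rewrite, assuming the hg-git extension left its hash map in .hg/git-mapfile (one "git-sha hg-sha" pair per line) and that the export contains an issues JSON file named db-1.0.json (that filename is an assumption):

```shell
# For every (git-sha, hg-sha) pair hg-git recorded, replace the old
# Mercurial hash with the new Git hash inside the issue export.
while read -r git_sha hg_sha; do
  sed -i "s/$hg_sha/$git_sha/g" db-1.0.json
done < .hg/git-mapfile
```

Plain string substitution like this is crude (it won't fix URL paths that changed), but it keeps the hashes in comments pointing at real commits.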


With the three steps above, you can convert the repository from Mercurial to Git, keeping the majority of the data in the new repository. The caveats are that some issue comment links, as well as the changeset IDs, will be different in the new repository.

Happy Hacking!

April 17, 2015

Backup to cloud when there is no space left on device

Imagine that you are planning to move some data from one server to another. To do so, you back up your data by creating a simple compressed tar archive with:

$ tar cjf /tmp/backup.tar.bz2 $HOME

And, to your surprise, you end up with a "no space left on device" error from tar. You need a backup and have no more space: what do you do? Simple: use some Unix pipes to do the job!


In my case, my source box already had the Google Cloud SDK configured, and a good network connection to upload my backup. Instead of doubling the space usage locally and uploading later, what I did was pipe the backup generated by tar directly into gsutil:

$ tar cjf - . | gsutil cp - gs://mybucket/backup.tar.bz2

Piping is, basically, connecting the standard output of one process to the standard input of another. This is, simply put, a way to take the output generated by tar and send it directly to gsutil, which in turn does a streaming upload to the cloud.

The same piping technique can be used with other tools as well. If you are copying from one server to another via SSH, you can also pipe it over with:

$ tar cjf - . | ssh user@backupserver 'cat > backup.tar.bz2'

You can also use a similar technique to restore, without the need to download everything first.
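For instance, a sketch of restoring straight from the bucket used above, unpacking the stream on the fly:

```shell
# Stream the archive from Cloud Storage into tar; nothing is
# written to disk except the extracted files themselves.
gsutil cp gs://mybucket/backup.tar.bz2 - | tar xjf -
```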

Happy hacking!


April 5, 2015

Setting up a Moodle instance with Compute Engine and Cloud SQL

Things are getting cloudy these days. And that's good! We have a lot of options available now, ranging from bare virtual machines, to fully managed platforms, and with all kinds of containerized environments in between.

In this blog post, we are going to explore the Google Cloud Platform to host a Moodle site, using a distributed setup that's easy to build and scale as we need to upgrade. The setup we are going to build looks like this:
  • One Compute Engine Instance, where we will install Moodle software
  • One Persistent Disk attached to the instance in Read/Write mode, that we will use to store Moodle Data
  • One Cloud SQL Instance, to store the Moodle Database
This setup has some advantages over putting everything into a single instance. First, it uses more than one node to host the system, distributing the load between the web server and the database. This improves performance and allows you to scale vertically, either by increasing the Cloud SQL instance tier or by upgrading your VM to a more performant one. Secondly, we split software from data, using both a separate database and a separate persistent disk. This approach makes software updates easy to manage because the software is isolated from the data disk. Making backups with this setup is also very easy: we can enable Cloud SQL backups to keep up to 7 backups, and also create a very simple script to back up our Moodle data regularly using differential snapshots on the persistent disk.

Getting Started

The first thing to do is to sign up for Google Cloud Platform using a Google Account. You can use the USD $300 free-trial credit to get started! The free-trial credit is valid for 60 days, and only for new accounts.

To use the free trial, or the Compute Engine virtual machines at all, you need to set up a valid payment method, which may require a credit card. To use the free trial, you will create a new project; if you already use Cloud Platform, I also recommend creating a new project to host only your Moodle system, unless you plan to share other resources with it.

With your account set up, you can start by creating a new Compute Engine VM. Compute Engine is billed by the minute after the first 15 minutes, making it easy to set up, stop, and then resume work if you need to. Let's begin with a g1-small (1 vCPU, 1.7 GB memory) instance, which can handle small to moderate traffic very well and is cost effective.

Launch and configure the virtual machine

To launch the instance from the Developers Console, click on your project, then navigate to Compute -> Compute Engine -> VM Instances. In the Instances panel, click on the New Instance button.

In the new instance screen, let's create our instance with an extra disk to store Moodle data. To do so, click on the Show advanced options link, and use the following values:
  • Instance name: moodle-server
  • Firewall: Allow HTTP traffic
  • Zone: choose one that applies to your location
  • Machine Type: g1-small (you can choose a higher one if you expect more traffic)
  • Boot Disk: new from image, and Debian wheezy backports as boot disk image option
  • Under additional disks, create a new one by clicking on the plus sign, and choose create a new disk.
    • Disk name: moodle-data
    • Disk type: regular persistent disk should go just fine, but if you expect a high load, I suggest SSD as they are more performant
    • Source type: none
    • Size: 200G. You can choose lower/higher values here, but keep in mind that this will be used for Moodle data, so it requires a reasonable amount of space, and that the larger the disk, the better its performance
    • Click on create (the disk is created immediately)
  • In networking, I recommend you to reserve a static IP: just click on New static IP and give it a name (moodle-ip). The IP is reserved as you click on this option
  • For the authentication options, I suggest you choosing Read/Write to Compute and Storage, and Enable to Cloud SQL. This allows you to run scripts to backup data from within your instance
  • Finally, click on the Create button
Your instance will be started, and we'll be able to work with it as soon as it boots up. This process is usually very fast, and in one or two minutes the instance will show up in the VM Instances screen. To manage the instance, we need to log in using SSH. You can use the SSH button from the Instances page:
The web-based shell will open in a new window, and a new user account will be added to your instance. Note that the web-based shell can get disconnected; to avoid losing your work or corrupting your instance, let's use a terminal multiplexer that will keep the shell running if the SSH session is dropped. I suggest tmux, as it is very easy and straightforward to install and run:

sudo bash
apt-get update && apt-get upgrade --yes
apt-get install tmux --yes

Once running, tmux will show a green bar at the bottom of the screen:

To detach from tmux and keep it running, press CTRL+B then D; type exit to close the shell session. If your connection is dropped, reopen the SSH window and reattach to tmux using:

sudo tmux attach-session -t 0

Format and mount the disk

Remember that we attached an empty disk to the VM? Since it is empty, we need to format it with a filesystem and mount it on a directory. To do so, we use a program shipped by Google called safe_format_and_mount, which does just that: it formats the disk only if it is not yet formatted, and mounts it for you.

First, let's create a mount point:

mkdir -p /mnt/moodledata

Then let's format and mount the disk (line breaks added for clarity; the backslashes join it into a single command):

/usr/share/google/safe_format_and_mount \
  -m "mkfs.ext4 -F" \
  /dev/disk/by-id/google-moodle-data /mnt/moodledata

Check that the device is mounted by listing the contents:

ls -l /mnt/moodledata

You should see a lost+found entry, for an empty Ext4 filesystem. To make this mount point be remounted during boot, add one entry to the /etc/fstab file:

cat >> /etc/fstab
/dev/disk/by-id/google-moodle-data /mnt/moodledata ext4 defaults 1 1

Type the mount line, then press CTRL+D to close the file. Important! Use double greater-than signs (>>) to append; a single > would truncate the file and you would lose the root mount point from fstab!

Launch and configure the Database

Now, let's launch the database using Cloud SQL. To do that, navigate to Storage -> Cloud SQL menu, then click on the New instance button:

Fill the form with the appropriate values:
  • Instance name: moodle-db
  • Region: choose the same geographic region as your Compute Engine VM
  • Choose the D0 tier. You can safely start with D0, and upgrade later if you see high usage on the database
  • Click on the Show advanced options link
  • Choose the MySQL 5.5 version
  • Choose a billing plan. In this case, as Moodle will constantly talk to your database, and since you will need a static IP address, the per-use plan may not make much difference. I recommend choosing the daily package plan
  • Select the same region as your Compute Engine VM for the location option
  • Leave the Enable backups option checked and choose a suitable time frame for backups; this can be changed later
  • Check Assign an IPv4 address to the instance. Although a free IPv6 address is available, we can't use it from within Compute Engine yet
  • Under allowed networks, click on the + sign and put in the IP address of your Compute Engine VM; you can use the same name as your VM: moodle-server
  • Click on the Create button to launch the instance
The server instance starts very quickly. We need to create a new MySQL user and a new database for Moodle. To do so, click on the instance name in the list, then on the Databases tab, then on the New database button:

In the new database form:
  • Database name: moodle
  • Character set: utf-8
  • Collation: default collation
  • Click on the Add button.
To create the user, click the Access Control tab, then on the Users tab, and finally click on the New User button.

In the new user form:
  • Username: moodle
  • Password: create a strong password, and take note of it to use on the Moodle setup
  • Click on the Add button
Now we are ready to install and configure our Moodle server! Let's get back to our compute engine VM terminal window.

Installing and configuring Moodle software

Moodle requires a web server and a few PHP modules. You can choose the web server you know best; for this tutorial I'll use the Apache web server. Since this is a Debian server, let's use apt to download and install everything we can, so we benefit from Debian security upgrades:

apt-get install apache2 php5 php5-gd php5-mysql php5-curl php5-xmlrpc php5-intl

In order to get the Moodle software, let's choose the latest stable version from the download site. Click on the Download tgz button and dismiss the automatic download window: we need the direct download link. Right-click on the "click here for manual download" link and copy the link address:

On the terminal window, we need to download it into our instance using curl. Type curl, paste the copied link with CTRL+V, then finish the command with > moodle.tar.gz, resulting in a command like this one:

curl > moodle.tar.gz

Important: do not forget the "> moodle.tar.gz" part at the end of the command, otherwise the file contents will be printed on the screen! Before unpacking the Moodle software, remove the default index.html page from Apache:

rm /var/www/index.html

Then, use the tar command to unpack the software in the Apache directory:

tar xf moodle.tar.gz -C /var/www --strip-components=1

The tar command above extracts the file contents into /var/www, skipping the creation of the moodle sub-directory. Now we need to configure Moodle to use all the resources we launched.

Moodle will need read/write permissions on the software and data directories. To fix permissions on these places, use the chown command:

chown www-data:www-data -R /var/www
chown www-data:www-data /mnt/moodledata

To configure Moodle, launch the setup wizard by navigating to your instance IP, or by clicking on the Compute Engine VM's IP link on the VM Instances page:
You should see the Moodle setup page:

Select your language, then click on the Next button. Moodle requires you to set some important options. The first one is the visible website address; let's keep the instance IP for now, and we'll change this value later when the setup is finished. You also need to tell Moodle the software directory and the data directory. The software directory is where we extracted Moodle, /var/www, and the data directory is the mount point we created on the 200G persistent disk: /mnt/moodledata. Enter these values and click Next.

In the next page, we need to setup the database connection. First, select the mysqli database driver, then click next. In the database connection settings fill the values:
  • Database host: the IP address of your Cloud SQL instance
  • Database name: moodle
  • Database username: moodle
  • Database password: the password you chose previously
  • Tables prefix: optional, just use the default
  • Leave the database port and Unix socket empty, as we are going to use the defaults
When you click Next, Moodle will connect to your instance, which will start up in response to the first connection. Moodle will also present the software license; read it, and if you agree, click Continue to install.

On the next screen, any required PHP modules that are missing will be listed in red. You can install them and hit reload to let the installer check again. If all is OK, just click on the Continue button to proceed with the installation.

The next step will take a reasonable amount of time: the Moodle installer will configure all database tables, populate some initial data, prepare the moodledata directory with some cache values, and get everything in place. Wait until the page finishes loading and you see a Continue button.

After the setup is completed, you can create the admin user. The admin user is the one with access to all setup pages, so you should choose a strong password for it. After you create the admin user, you will be prompted to enter some basic site information. Fill in the form and click Save changes.

Now you should see your Moodle home page, and you can start playing around, creating users and courses, installing plugins and so forth.

Enable the cron job

Moodle requires you to configure a job that runs periodically, called a cron job. The easiest way to set up the Moodle cron job is by running it with curl. Let's enable the Moodle cron with a simple shell script and the Debian run-parts directory for cron jobs. First, create a simple shell script that fetches the cron URL and logs the output for debugging:

cat > /etc/cron.hourly/moodle-cron
#!/bin/sh
curl http://localhost/admin/cron.php > /var/log/moodle-cron.log 2>&1

To finish the file, type CTRL+D and check the contents with the cat command:

cat /etc/cron.hourly/moodle-cron

Now, let's make it executable, and run it once as a test:

chmod +x /etc/cron.hourly/moodle-cron
/etc/cron.hourly/moodle-cron

The cron job will run hourly, and you can check its output in the log file.

Sending e-mail

Compute Engine VMs are not allowed to send e-mail directly. However, you can configure Moodle to use an SMTP server hosted on another service. Google partnered with SendGrid to deliver your Compute Engine e-mails, so you can sign up for a free account that can send 25,000 e-mails from your Google projects. Once signed up, grab the SendGrid credentials and configure the hostname, port 2525, username, and password using the Moodle e-mail setup instructions. Alternatively, you can set up Mandrill, which also has a free monthly e-mail quota and provides an SMTP server connection just like SendGrid.


Some improvements that can be done:
  • Add another daily cron script that backs up the moodle-data disk, as well as the moodle-server disk (the boot disk). See how to take disk snapshots for backups, and take care to make your script remove old backups at the end.
  • Change the site URL in config.php. Moodle saves some data in the database with the full URL, so you should use the database search-and-replace tool to update that data to the new URL.
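As a minimal sketch of that daily backup script, assuming the disk names used in this post and an example zone (adjust both to your setup; the instance's service account must have Compute read/write scope, as suggested earlier):

```shell
#!/bin/sh
# Snapshot both disks with date-stamped names; the zone below is an
# example and must match the zone where your disks live.
ZONE=us-central1-a
DATE="$(date +%Y%m%d)"
gcloud compute disks snapshot moodle-data moodle-server \
    --zone="$ZONE" \
    --snapshot-names="moodle-data-$DATE,moodle-server-$DATE"
```

Dropped into /etc/cron.daily and made executable, this gives you a daily restore point; remember to prune old snapshots so they don't accumulate costs.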


In this post, we learned how to set up a simple, efficient, and cost-effective Moodle site that can handle mid-size traffic and provides a good infrastructure that can be vertically upgraded in the future.

Important! You will be billed for the resources used, so if you did this only as a test, remember to remove the Compute Engine VM, the persistent disks, and the Cloud SQL instance.

March 31, 2015

Building custom Linux kernel the Debian way

Image by: shokunin
Building a Linux kernel is a nice way to learn some new skills, install a fresh new version of your operating system's core software, and make any optimizations you see fit for your machine. However, building a kernel is not a trivial task. Between downloading, configuring, building, and installing, you may get lost if you don't follow some tips.

In this blog post, we are going to understand some basic steps on how to build a custom kernel from sources, using some tools available in the Debian GNU/Linux operating system. If you use any other Debian derivative, like Ubuntu or Mint, this tutorial should work as well.

Why build a custom Kernel?

First and most important: because you can! Secondly: because it is fun! And, of course, because you may need a newer kernel to support your hardware, or may need to rebuild the kernel to enable specific features.

It is also a good experience that allows you to hack into the biggest open source project, see all available configuration options and disable those that you may never use, making a slim and optimized kernel for your machine.

Understanding the process

In Debian and derivatives, all packages are installed by the dpkg tool, or via the apt command line and its friends. These tools ensure that every file installed by a package is tracked by the system, allowing you to properly upgrade or remove it later. If you install something in Debian without using dpkg/apt, you risk compromising your system and any future upgrades from Debian.

Building a Debian package is a complex task by itself: you need to follow a set of rules in order to do it right. But for some commonly used software, Debian provides tools that help you automate the process of generating a .deb installable file out of a source tarball. This is true for the JDK, as well as for the kernel.

To build a kernel from sources and generate a Debian package using the tools available in the repository, you need to install the kernel-package package, which provides the make-kpkg tool:

# apt-get install kernel-package build-essential

The process to build the kernel is:
  1. Download the sources from kernel.org
  2. Extract the source tarball: 
    • tar xf linux-3.19.3.tar.xz
  3. Configure the Kernel options enabling/disabling features:
    • cd linux-3.19.3 && make menuconfig
  4. Compile all sources to generate the kernel binary and modules:
    • make-kpkg --revision=~custom1 -j 4 binary_image
This process is simple, except for the configuration options: if you don't know how to configure the kernel, you will get lost in the make menuconfig script. Since configuring the kernel is hard to properly understand and master, let's use another target to do that in a more automated way. Instead of make menuconfig, use make olddefconfig. This takes the current kernel configuration as a base and updates it to match the new kernel's options, using the default values for any new options.
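Putting the steps together with olddefconfig, the whole build boils down to something like this (the version number is just the example used above; copying the running kernel's config from /boot is one common way to seed olddefconfig):

```shell
# Unpack the sources
tar xf linux-3.19.3.tar.xz
cd linux-3.19.3
# Seed the configuration from the running kernel, then fill in
# defaults for any options new to this release
cp "/boot/config-$(uname -r)" .config
make olddefconfig
# Build the .deb kernel image package
make-kpkg --revision=~custom1 -j 4 binary_image
```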

Helper script

This process can be automated even further using a script I wrote called kernelbuild. The script is available in the set of scripts I host in the tools repository on my Bitbucket account. To build the newest stable kernel version, all you need to do is:

$ kernelbuild --fast --safe-config

That's it. The script downloads the latest stable kernel from kernel.org, unpacks the source in the current directory, automatically configures the kernel based on your current running one (using automated defaults for new build options), and then compiles the kernel using the make-kpkg tool. This process generates all the .deb files for you: the kernel image, a linux-headers package, and a package with debug symbols. Super easy!

How the script works

The script uses wget to download the kernel tarball for you. It parses the HTML output to find the latest stable version, and downloads that version's tarball. You can skip autodetection and download a specific version with the -k switch.

Once downloaded, the script unpacks the source tarball. After that step, the script runs the make olddefconfig kernel configuration target. Although simple, this can be odd if the kernel you are running is very different from the one you are building. You may be prompted to set some options by hand; if unsure, just press ENTER to use the default value.

The next step is to build the kernel, and the script passes a few parameters to make-kpkg, such as a custom version tag (~custom1). This version number uses a tilde, so your kernel package can be upgraded by the official Debian kernel release once it is available in the repositories. The script also uses some other build options, like the -j flag, which enables parallel builds using all available CPU cores.

Installing your new Kernel

To install your Debian kernel packages, all you need to do is use the dpkg tool. Use ls to see the .deb files built, and install the image and headers packages. If you built kernel version 3.19.3, install it with:

# dpkg -i linux-image-3.19.3_3.19.3~custom1_amd64.deb
# dpkg -i linux-headers-3.19.3_3.19.3~custom1_amd64.deb

This installs the packages built by the script, and you can then reboot your machine to use the new kernel. The package built by make-kpkg contains all the triggers needed to update/generate the initrd file (necessary to boot) and create the appropriate GRUB menu entries.


Building the Linux kernel is a very interesting task that can help you better understand how an operating system works, and learn more about the options available to optimize and customize your machine's software.

In this post, we saw some tips for building a kernel suitable for use on a Debian-based distribution, and how you can use a helper script to generate a Debian package that integrates nicely with the operating system's package management.

Disclaimer: I have tested the scripts and commands provided in this post on multiple machines without issues. The source code is provided as-is, without any warranties, and I am not responsible for any damage you cause to your software or hardware when using it. Use at your own risk.

Happy hacking!