Sunday 29 September 2013

Professional Scrum Master I

Last week, I was able to become a Professional Scrum Master. Can you believe that? :)

I had known a few things about Scrum for some time, but I had never attended a real Scrum course until about a week ago. The course took two days to complete and was conducted by one of the local training centers. After the course I was given a password to take the Professional Scrum Master I assessment from Scrum.org.

Apart from taking Scrum training, to pass the assessment I believe you really need to take the "Open Assessment" (which is free of charge and can be found on the same website) as many times as you need to feel comfortable passing it. The real assessment contains 80 questions and you have only 60 minutes to answer them. A number of the questions are significantly harder than those in the open assessment. To pass the exam, you need a score of at least 85%. Personally, I thought I had passed comfortably, but it turned out that I got "just" 89%.

After passing it, you need to wait a few hours before your certificate is ready to be printed from a PDF document :) Oh, and you are given the right to use the badge.


Tuesday 10 September 2013

Effective Akka by Jamie Allen; O'Reilly Media

This short book, written by Jamie Allen, contains a number of pieces of advice for Akka developers. I believe you should already be familiar with the Akka framework before reading the book, because the author assumes that you know how to use at least its basic features.

The first chapter of the book discusses approaches to designing actor-based applications. It's hard to disagree with the ideas presented, but I think they are something most Akka developers already know.

Effective Akka's second chapter presents two quite small patterns used in real-world applications. I liked the first one, but the second I consider a tip rather than a "pattern", as Jamie calls it. Applications of the patterns are presented with unit-tested source code, which is definitely a plus.

The third chapter (the last one!) presents general advice on using Akka, but I feel developers should be familiar with it already, as it is not much different from general programming and design rules. The only difference is that here Allen shows how it is relevant to building actor-based applications. You will also find ideas here for creating resilient, high-performance systems.

In conclusion, I'd say the book seems nice to me. On the other hand, as an Akka developer, I'd love to read a book that would push me two levels higher in building actor systems, and this book left me a little disappointed in that regard.


Saturday 17 August 2013

Simple app using Gradle, Spray and Heroku

After posting quite a few times in June, I slowed down and posted just once (a book review) over July and August. One of the reasons is that it was quite a busy period for me. I'd like to get back to writing by posting some cool stuff here.

Recently I've read two books about the Gradle build system, and I've even reviewed one of them here. I decided to create an application built with Gradle, so that I can get more practice with it.

As cloud computing is getting more and more popular, I thought I would create an app to run on the Heroku platform. The application will use the Spray-Can server to create a Scala, actor-based web application. It will offer a simple REST API and use the spray-json module for JSON conversions between strings and case classes.

Build setup


In the first step we will create a build file (build.gradle) that tells Gradle which dependencies the application needs, where to get them, and how to create the resulting package.
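
A minimal build.gradle along these lines could look as follows (the library versions and the main class name here are just examples, not the exact ones used):

    apply plugin: 'scala'
    apply plugin: 'application'

    // Hypothetical main class name
    mainClassName = 'example.Main'

    repositories {
        mavenCentral()
        maven { url 'http://repo.spray.io' } // Spray's own repository
    }

    dependencies {
        compile 'org.scala-lang:scala-library:2.10.2'
        compile 'com.typesafe.akka:akka-actor_2.10:2.2.0'
        compile 'io.spray:spray-can:1.2-M8'
        compile 'io.spray:spray-routing:1.2-M8'
        compile 'io.spray:spray-json_2.10:1.2.5'
    }

    // Heroku runs the "stage" task when deploying the app
    task stage(dependsOn: ['clean', 'installApp'])
    installApp.mustRunAfter clean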

We define two repositories that we will be getting dependencies from. There are a few dependencies, which include the Scala library and the Akka and Spray modules. We also define a new task, "stage", that will later be run by Heroku. Basically it just triggers two other Gradle tasks: clean and installApp. The latter gathers the dependencies and creates a distributable package with a script that runs the app.

Some of you might be wondering what Akka is. Let me just tell you that it is an innovative and exciting framework for building scalable, distributed systems. It is used internally by Spray and will also be used directly in the app.

Now we need to write some Scala code for the actual application.

Creating Spray app


First, let's create a Scala App.
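
Here is a minimal sketch of it, assuming the spray-can 1.2 / Akka 2.2 style API (the object and system names are just examples):

    import akka.actor.{ActorSystem, Props}
    import akka.io.IO
    import spray.can.Http

    object Main extends App {
      implicit val system = ActorSystem("hello-world-system")

      // The single actor that will handle incoming HTTP requests
      val handler = system.actorOf(Props[HelloWorldActor], "hello-world-actor")

      // Heroku passes the port to bind to in the PORT environment variable;
      // fall back to 8080 when it is not set (e.g. when running locally)
      val port = sys.env.getOrElse("PORT", "8080").toInt

      IO(Http) ! Http.Bind(handler, interface = "0.0.0.0", port = port)
    }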

It just initializes an Akka actor system and creates a single actor in it (an instance of HelloWorldActor). Then this actor is bound to the port provided in an environment variable, or to 8080 if none is provided.

Let's now focus on the behavior of this actor. We will be creating a JSON-based REST API, so I've written some code for our domain and for converting its case class to JSON (and from JSON back to a case class if needed).
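
A sketch of that code, assuming spray-json's DefaultJsonProtocol (the Person fields are just examples):

    import spray.json.DefaultJsonProtocol

    // Our tiny domain: a single case class
    case class Person(id: Int, firstName: String, lastName: String)

    // Deriving a JsonFormat gives conversions in both directions:
    // Person => JSON via .toJson and JSON => Person via .convertTo[Person]
    object PersonJsonProtocol extends DefaultJsonProtocol {
      implicit val personFormat = jsonFormat3(Person)
    }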

Now, the behaviour of HelloWorldActor:
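
It could look roughly like this (the name in the response is just a placeholder):

    import akka.actor.Actor
    import spray.routing.HttpService
    import spray.json._
    import PersonJsonProtocol._

    // The route lives in its own trait so that it can be unit tested
    // without spinning up a whole actor system
    trait HelloWorld extends HttpService {
      val helloRoute =
        path("api" / "persons" / IntNumber) { id =>
          get {
            complete {
              // .toJson comes from spray-json
              Person(id, "John", "Doe").toJson.prettyPrint
            }
          }
        }
    }

    class HelloWorldActor extends Actor with HelloWorld {
      def actorRefFactory = context
      def receive = runRoute(helloRoute)
    }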

In Spray (or Spray-Routing, I should say), you create something called a route, which is a set of routing rules. HelloWorldActor basically just runs the route from the HelloWorld trait. You usually keep these two things separate, as it allows you to unit test the route more easily.

The route defined in HelloWorld specifies that whenever there's a request for the path api/persons/X, X is converted to an integer and a closure is run that returns a JSON object with my name :). As you can see in the snippet above, to get the JSON representation of a case class I can just use the .toJson method on a Person; there is a similar method to get a case class back from a string.

The app should be runnable by now. You can just "gradle run" and check the result at http://localhost:8080/api/persons/5.

Deploying to Heroku


Now that we have a runnable application in place, we can think about running it in the cloud. To run this app on Heroku, we need to provide a special file, the Procfile. It tells Heroku how to actually run the web application.

The Procfile can contain just a single line:
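
Assuming the project is called hello-spray (a hypothetical name), the line follows the layout produced by Gradle's application plugin, build/install/<project>/bin/<project>:

    web: build/install/hello-spray/bin/hello-spray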

As you can see, Heroku will just invoke the script that was created by the installApp task of Gradle's application plugin.

As a final point, I'd like to mention that Heroku's official buildpack (a set of scripts that builds the app) for Gradle-based applications is a bit outdated, and most probably this application cannot be run with it straight away.

But I've already forked Heroku's Gradle buildpack repository on GitHub and updated it to fetch the newest Gradle version (1.7). You can freely use it by setting an environment variable using the Heroku console:
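
This is done through the BUILDPACK_URL config variable, roughly like this (the repository URL below is just a placeholder for the fork's address):

    heroku config:set BUILDPACK_URL=https://github.com/<your-fork>/heroku-buildpack-gradle.git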

Conclusion


That's basically it! Our REST API should now work on Heroku. I've created a repository on GitHub with the code of this application, with even some more additions. As you can see, creating an app based on Heroku, Spray and Gradle was pretty quick and easy.

Writing this post was fun and I look forward to posting again. I think that next time I might write about running a web application on a Raspberry Pi... :)

Saturday 3 August 2013

Gradle Beyond the Basics by Tim Berglund, O'Reilly Media

It is a quite short book (only 4 chapters) that presents some more advanced topics of Gradle. Tim Berglund covers topics such as file copying and processing tasks, building custom plugins, hooking into life-cycle events, and dependency management.

I enjoyed the book. It is easy to read, as the author shows many code snippets as examples for the topics he covers. And because of the book's relatively short length, you don't need to spend a lot of time reading about details that you probably don't care about.

The book is definitely not for those who are new to the Gradle build system. I believe you should at least be familiar with the topics covered in the previously published "Building and Testing with Gradle". The author assumes that the reader already knows how to use Gradle and quickly moves on to describing its further features.

On the other hand, if you already know how to use those features, then you might not need to read this book at all. If your job requires the more advanced Gradle tools, you might as well use only the official online documentation, which most probably covers all the topics from the book. The main benefit of reading Tim's book is that he provides step-by-step instructions for using these advanced features, like creating custom plugins.

Sunday 30 June 2013

Hello world with Vagrant

Hello everyone!

Not long ago I wrote a review of Vagrant: Up and Running. This time, I'd like to post a tutorial on using Vagrant for those who are totally new to it. Let me just remind you that Vagrant is a useful tool for managing virtual machines and their settings: resources, networking and more. In most cases, people use it to create VMs for VirtualBox, but it is also possible to set up Amazon EC2 machines directly from Vagrant.

Setting up


After installing Vagrant (and VirtualBox), to create a new virtual machine you just need to type this in your terminal:
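
For example, using the standard Ubuntu 12.04 ("precise") box:

    vagrant init precise64 http://files.vagrantup.com/precise64.box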

This creates new settings for a VM that will be based on Ubuntu 12.04. A new file, Vagrantfile, will be created in your current directory. It's a text file with all the settings of your VM. Actually, it contains just Ruby source code, but don't think you need to know Ruby to use Vagrant efficiently.

The initially created Vagrantfile contains some default settings and a huge number of settings that are commented out, just to give you an idea of what else can be configured here. But for now, let's just keep the defaults.

Starting the machine


To start your newly defined virtual machine you just need to type:
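
    vagrant up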


If it's the first time you run it, the VM will be created. If the base image for the VM (a clean Ubuntu) is needed, it will be downloaded automatically from the URL specified in the "init" command.

To actually use the machine, you need to connect over SSH:
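
    vagrant ssh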


Finishing work


After you finish your work, you would usually stop the machine with:
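
    vagrant halt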


Or you can destroy the whole machine, so that all its resources are freed (including hard disk space):
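
    vagrant destroy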


More resources


If you need high performance from the virtual machine, you probably need to adjust the resources it uses. When using Vagrant with VirtualBox, you need to add some additional settings to the Vagrantfile. These settings are specified in the format of VirtualBox's "modifyvm" command:
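
For example, to give the VM more memory and an extra CPU (the values are just examples):

    # inside the Vagrant.configure block of your Vagrantfile
    config.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "2048"]
      vb.customize ["modifyvm", :id, "--cpus", "2"]
    end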


Sharing a folder


When you work on a guest virtual machine, there is often a need to share a folder between the host and guest operating systems. A nice thing about Vagrant is that a single line of configuration is enough to set up such a shared folder (and mount it on the guest OS):
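
For example (host path first, then the mount point on the guest; both paths are just examples):

    config.vm.synced_folder "../shared", "/home/vagrant/shared"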


After next "vagrant up", you will be able to use the folder on guest.

Port forwarding


When you run some kind of (web?) service on the guest, you need to be able to connect to it somehow. In Vagrant, it is just another one-liner!
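
    config.vm.network :forwarded_port, guest: 9000, host: 9090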


This line specifies that when you try to connect from the host to "localhost:9090", you will actually connect to the guest machine on port 9000. It's as simple as that. This way you can easily test a web application running on the guest, using your web browser on the host.

Additional software - provisioning


Managing software inside a virtual machine is called provisioning. There are a few mechanisms available in Vagrant for this job. Here I will describe only the most basic one: shell provisioning.

Shell provisioning is just a set of shell "tasks" to be executed after the machine boots. You can write the shell commands directly in the Vagrantfile, or point to a shell script that should be executed.

On a clean Ubuntu you should start by running "apt-get -f install", just to be able to install additional software using the apt-get package management tool.

To run it after each machine boot, just put the following line in your Vagrantfile:
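
    config.vm.provision :shell, inline: "apt-get -f install"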


If you want to run a script instead, you can specify a path to it, e.g.:
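
    # the script name here is just an example
    config.vm.provision :shell, path: "install_git.sh"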


I assume that this script has instructions for installing Git on the system. I'd like to point out that when using Vagrant, you should use "apt-get install -y {package_name}", which makes apt-get assume a positive answer to any "y/N" question.

You might wonder what you can do to make some scripts run only once (on the first boot), rather than on every "vagrant up". The simple trick is to put an if statement inside the script that checks for the presence of some file (let's say you keep the logs of installing Git in it):
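
For example (the log file path is just an example):

    #!/bin/sh
    # The log file doubles as a marker that the installation already ran
    if [ ! -f /home/vagrant/git_install.log ]
    then
        apt-get install -y git > /home/vagrant/git_install.log
    fi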


Then the script will run only the first time you start the VM. If you want to run it again, you need to delete that file.

FAQ


Any GUI? If you set the flag "vb.gui = true" inside the VirtualBox configuration (in the Vagrantfile), you will have your GUI. But you would probably also need to install packages like GNOME to make real use of it.

More provisioning techniques? There are also ways to use provisioning tools like Puppet or Chef, but I'm not an expert on those, so you'll need to find out about them on your own :)

Custom base images? No problem, just look up "how to create your own box" in the Vagrant documentation.

More network settings? You can do a lot; actually anything (I think) that is possible with plain VirtualBox.

More machines defined inside a Vagrantfile? Yes, that's doable. Normally you use only a single Vagrantfile per project, even if you need more virtual machines.

Examples


You can find an example definition of a Vagrant environment on my GitHub. It is a single-machine definition of a VM for Scala + MongoDB + Play (actually + sbt) development.

Note: it's common among Vagrant users to run the code under development on the VM, but to edit it in your favourite IDE on the host OS.

Monday 24 June 2013

Graph Databases by Ian Robinson, Jim Webber, Emil Eifrem; O'Reilly Media

This book significantly helps in understanding what graph databases are and how to use them properly. The authors introduce the basic ideas behind graph databases. They write about why the need for such databases emerged, and why there is a need for a database engine in which relationships are first-class citizens.

I believe the most important chapter of this book is the one that explains data modelling with graphs. The way you need to think when using a graph database is totally different from other types of databases. The authors base their teaching on a set of examples, each discussed in detail. Various use cases are shown, and you'll be surprised how efficient the data model can be when used properly.

You will also be able to learn the basics of Cypher, the language used for querying a graph database. It's not a really comprehensive introduction, so it cannot be used as a reference. The book shows examples of querying Neo4j, which is probably the most popular graph database implementation. I don't think you will be very comfortable using Neo4j immediately after reading this book. It rather intends to make you familiar with the fundamental concepts of graph databases, showing how they differ from still more popular solutions like RDBMSs.

Some additional topics are also covered, like an overview of using a graph database in an agile (also TDD-based) manner, an introduction to Neo4j internals (the different available APIs and ways of running it), and an overview of other NoSQL storage.

I really liked reading it, and the book made me more interested in graph databases, as it provided solid arguments for using them in various applications. On the other hand, after reading it I still think there's a lot for me to learn (from other resources) before I become comfortable with Neo4j. I would recommend this book to all developers who are new to the concepts of graph databases and who want to become familiar with their strong points before they start using concrete graph database solutions like Neo4j.

Graph Databases - O'Reilly Product Page


Thursday 20 June 2013

Tricity JUG: Apache Lucene in practice

On June 15th I was one of the participants of a workshop titled "Apache Lucene in practice". The meeting was organized by the Tricity Java User Group and was led by Dominika Puzio and Patryk Makuch, whose previous experience includes building search engines for many systems belonging to Wirtualna Polska.

Apache Lucene is a high-performance text search engine written entirely in Java. That's why the workshop included developing a simple Java web application for searching through Wikipedia content. The actual Wikipedia contents had already been downloaded by the organizers and were distributed before the event.

The project that was developed during the meeting is pushed to GitHub; you can access it through my fork. As a participant, I didn't need to spend a lot of time writing the same code as the presenters. What I was really doing was just checking out the next commit with git after each completed step, and only sometimes experimenting with the given code. This way I could just listen carefully to the presenters most of the time, as they explained each step in great detail. The presenters showed that they have a lot of experience in this field and shared a lot of it.

Should I mention that the organizers ordered pizza for the participants? :)

I really enjoyed the workshop. I believe that I've learned a lot, and I think that I'm now fully capable of building a Lucene-based search engine on my own :). I'm glad that I could participate in the event and I'm looking forward to the next TJUG meetings.