Tropical Software Observations

28 November 2011

Posted by Irregular Zero

at 5:24 PM


Setting up KVM on Ubuntu 10.04 (Lucid Lynx)

After doing a KVM install on Debian Squeeze and struggling to get a VM up and running, the hassle convinced me to go back to Ubuntu and its vm-builder package, which allows one to create VMs relatively easily once the setup is complete. There is a vm-builder port for Debian, though it only works for building older versions of Ubuntu, and I want to run the latest, Ubuntu 11.10 (Oneiric Ocelot).

Starting with a bare-metal Ubuntu 10.04 LTS (Lucid Lynx) 64-bit install, below is the list of commands and instructions to install and set up KVM. Details on these instructions can be found in the Ubuntu community documentation, KVM Installation and KVM Networking:


  • sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

  • virsh -c qemu:///system list (to verify the installation; it should return with no errors)

  • sudo apt-get install libcap2-bin

  • sudo setcap cap_net_admin=ei /usr/bin/qemu-system-x86_64

  • sudo vi /etc/network/interfaces
    • Original file:
      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      auto eth0
      iface eth0 inet static
      address 10.10.3.140
      broadcast 10.10.3.143
      netmask 255.255.255.248
      gateway 10.10.3.137

      # default route to access subnet
      up route add -net 10.10.3.136 netmask 255.255.255.248 gw 10.10.3.137 eth0

    • Modified file:
      # The loopback network interface
      auto lo
      iface lo inet loopback

      # device: eth0
      auto eth0
      iface eth0 inet manual

      # The primary network interface
      auto br0
      iface br0 inet static
      address 10.10.3.140
      broadcast 10.10.3.143
      netmask 255.255.255.248
      gateway 10.10.3.137
      bridge_ports eth0
      bridge_stp off
      bridge_fd 9
      bridge_hello 2
      bridge_maxage 12


      # default route to access subnet
      up route add -net 10.10.3.136 netmask 255.255.255.248 gw 10.10.3.137 eth0
      up route add -net 10.10.3.136 netmask 255.255.255.248 gw 10.10.3.137 br0

  • sudo /etc/init.d/networking restart

  • Running ifconfig should now list the following interfaces: br0, eth0, lo, and virbr0

This completes the KVM installation and the creation of a bridge for the VMs. Up next is replacing vm-builder. The one in the Ubuntu packages is faulty and will not let you install Ubuntu 11.10 (Oneiric Ocelot), so I updated to the latest version by downloading the source, then building and installing it. The steps below can be found in this accepted answer:

  • sudo apt-get install bzr

  • sudo apt-get install epydoc (big install here, ~400 MB)

  • bzr branch lp:ubuntu/vm-builder ubzr-vm-builder

  • cd ubzr-vm-builder

  • fakeroot debian/rules binary

  • sudo dpkg -i ../*vm-builder*.deb

With that, everything is installed and vm-builder is ready to run. The easiest way is to use a script, so that VM creation can be configured once and repeated as desired; the only changes required are the hostname, IP, and maybe memory. Obtain the Ubuntu 11.10 64-bit server ISO and put it in the same place as the script. The directory I used is ~/vm/basekvm:

  • cd ~/vm/basekvm

  • sudo vi create_vm.sh
    • File:
      #!/bin/bash

      # Configure this before running the command
      HOSTNAME=myhostname
      MEMORY=2048
      IP=192.168.122.10
      # -- End of configuration

      vmbuilder kvm ubuntu \
      --destdir=/var/lib/libvirt/images/$HOSTNAME \
      --ip=$IP \
      --hostname=$HOSTNAME \
      --mem=$MEMORY \
      --suite=oneiric \
      --flavour=virtual \
      --arch=amd64 \
      --iso=/root/vm/basekvm/ubuntu-11.10-server-amd64.iso \
      --mirror=http://de.archive.ubuntu.com/ubuntu \
      --libvirt=qemu:///system \
      --domain=localdomain \
      --part=/root/vm/basekvm/vmbuilder.partition \
      --bridge=virbr0 \
      --gw=192.168.122.1 \
      --mask=255.255.255.0 \
      --user=myusername \
      --name=myname \
      --pass=mypassword \
      --tmpfs=- \
      --addpkg=vim-nox \
      --addpkg=acpid \
      --addpkg=unattended-upgrades \
      --addpkg=openssh-server \
      --firstboot=/root/vm/basekvm/fboot.sh \
      -o

  • sudo chmod 700 create_vm.sh

  • sudo vi fboot.sh (Optional)
    • File:
      #!/bin/sh
      # This script will run the first time the virtual machine boots.
      # It is run as root.

      # Expire the user's password so it must be changed at first login
      # (this should match the --user value in create_vm.sh)
      passwd -e myusername

      # Install openssh-server
      apt-get update
      apt-get install -qqy --force-yes openssh-server

  • sudo chmod 777 fboot.sh

  • sudo vi vmbuilder.partition
    • File:
      root 8000
      swap 4000
      ---
      /var 8000

  • cd ~/vm

  • ln -s /var/lib/libvirt/images/ images

The create_vm.sh file is basically a template script. You could modify it to accept console input so that you don't have to edit the values in the file each time; a sketch of that follows. The symbolic link points at the directory where the VM disk images are located once created.
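
A rough sketch of such a variant, taking the values as command-line arguments (untested; all vmbuilder options are copied unchanged from the template above):

#!/bin/bash
# Usage: sudo ./create_vm.sh <hostname> <ip> [memory]
HOSTNAME=${1:?usage: $0 hostname ip [memory]}
IP=${2:?usage: $0 hostname ip [memory]}
MEMORY=${3:-2048}

vmbuilder kvm ubuntu \
--destdir=/var/lib/libvirt/images/$HOSTNAME \
--ip=$IP \
--hostname=$HOSTNAME \
--mem=$MEMORY \
--suite=oneiric \
--flavour=virtual \
--arch=amd64 \
--iso=/root/vm/basekvm/ubuntu-11.10-server-amd64.iso \
--mirror=http://de.archive.ubuntu.com/ubuntu \
--libvirt=qemu:///system \
--domain=localdomain \
--part=/root/vm/basekvm/vmbuilder.partition \
--bridge=virbr0 \
--gw=192.168.122.1 \
--mask=255.255.255.0 \
--user=myusername \
--name=myname \
--pass=mypassword \
--tmpfs=- \
--addpkg=vim-nox \
--addpkg=acpid \
--addpkg=unattended-upgrades \
--addpkg=openssh-server \
--firstboot=/root/vm/basekvm/fboot.sh \
-o

Using the template as-is, below is how you would create a VM: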

  • sudo cp basekvm/create_vm.sh create_vm_myvmname.sh

  • sudo vi create_vm_myvmname.sh (edit HOSTNAME, IP, and MEMORY as desired)

  • sudo ./create_vm_myvmname.sh

  • virsh start myvmname

And that's it! A VM has been successfully created and started up. Give it a few minutes and then you can log in through ssh using the credentials in the script. If ssh is slow to connect, try this.

Notes about Delayed Job and Upstart

Upstart?

If you've been working with Linux systems for a while, you might have heard about upstart. As the upstart project's home page describes, it's an event-based replacement for the System V init daemon.

Basically, upstart takes care of starting and stopping services and ad hoc tasks when events happen, such as when your system boots up or shuts down. And more!

Why Upstart?

As the saying goes, “don’t fix things if they aren’t broken,” so why would we bother with upstart in the first place when SysV init has been working fine for decades? Well, for one thing Ubuntu, Fedora, and others are moving to it! And upstart is event-based, so you can easily write scripts to handle specific events. For example, it's trivial to run a script after the network services have started, as the snippet below shows. With init we have to rely on the static priorities of the scripts, which can get pretty messy and inflexible over time. Upstart is much more than what's described here and has many more features, as described in the project's feature highlights.
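
For instance, a minimal event-triggered job could look like this (the file name is hypothetical, and it assumes Ubuntu's standard network-interface jobs, which emit the net-device-up event):

# /etc/init/log-eth0-up.conf
# Runs once each time eth0 comes up.
start on net-device-up IFACE=eth0
task
exec logger "eth0 is up"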

Delayed Job, RVM, and Upstart

Recently I needed to write a basic script to make sure that delayed_job gets started when the server is rebooted. Since I was using rvm to manage all the ruby processes on the server, I needed to make sure that rvm is properly loaded before actually running the delayed_job process.

There are enough articles out there about getting your scripts to run with RVM. However, I keep running into problems with this very thing!

Finally, this is the script I came up with. It's a fairly simple script to get the delayed_job process started.
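
In outline, the upstart job plus the external script it calls look like this (the job name, log path, and application directory are assumptions; adjust them to your setup):

# /etc/init/delayed_job.conf
description "start delayed_job at boot"

# a one-shot task: just launch the daemon, don't supervise it
task

start on runlevel [2345]

script
  exec /home/deployer/start_delayed_job.sh
end script

#!/bin/bash
# /home/deployer/start_delayed_job.sh -- called by the upstart job above

# load rvm so the right ruby is on the PATH (change to your rvm install path)
rvm_path=/home/deployer/.rvm
source "/home/deployer/.rvm/scripts/rvm"

# custom log entry for each run
echo "$(date): starting delayed_job" >> /home/deployer/delayed_job_upstart.log

# assumed Capistrano-style app location; RAILS_ENV may differ
cd /home/deployer/app/current
RAILS_ENV=production script/delayed_job start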

Notes


  • I decided to keep the /etc/init/file.conf simple and put most of the code in my external script file

  • the ‘task’ stanza can be used when you just want to do something simple like launch the daemon (and not create a daemon for upstart to watch, restart, etc.)

  • ‘start on runlevel [2345]’ tells upstart to start your task on runlevels 2,3,4, and 5

  • you can write your script inside the script…end script block. Here I'm just calling my external script

  • if you're running a Ruby script and have your Ruby set up via rvm be sure to include the two lines:

    • rvm_path=/home/deployer/.rvm

    • source "/home/deployer/.rvm/scripts/rvm"

    • this needs to be changed according to your rvm installation path

  • I'm also keeping a custom log that will have an entry for each time the task is run

25 November 2011

Posted by Irregular Zero

at 10:35 PM


KVM host with gateway guest using port-forwarding

Using the 3 rules listed here and below, a KVM host can forward all http and ssh traffic to a specified gateway guest VM:

iptables -t nat -I PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -I PREROUTING -p tcp --dport 22 -j DNAT --to-destination 10.0.0.2:22
iptables -I FORWARD -m state -d 10.0.0.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT

Making it permanent requires going through this page and using the following commands:

sudo sh -c "iptables-save -c > /etc/iptables.rules" (after applying the 3 commands above)
sudo vi /etc/network/if-pre-up.d/iptablesload

The /etc/network/if-pre-up.d/iptablesload file will have the following text:

#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
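
The file also needs to be executable, or it will not run when the interface comes up:

sudo chmod +x /etc/network/if-pre-up.d/iptablesload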

The KVM host will now have VM creation as its sole focus. Ensure the host's ssh port has been changed if it must stay reachable from outside; otherwise it can be accessed through the gateway guest.
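
For example, moving the host's sshd to another port (2222 here is an arbitrary choice):

# pick any unused port; 2222 is only an example
sudo sed -i 's/^Port 22$/Port 2222/' /etc/ssh/sshd_config
sudo /etc/init.d/ssh restart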

All redirection and VM access go through the gateway guest. The guest will need nginx installed so it can act as an http proxy for the other VMs. All ssh access uses the gateway guest as a stepping stone to the other VMs.

With just these 3 rules, however, traffic coming from the gateway guest essentially loops back to the gateway guest itself, making it incapable of reaching the other VMs. Applying 1 more rule after the 3 above solves this. The rule accepts all packets from the VM IP range and does not do any forwarding:

iptables -t nat -I PREROUTING -p tcp --source 10.0.0.0/24 -j ACCEPT

01 September 2011

Posted by Yasith Fernando

at 7:39 PM


Deployment Automation Tools/Frameworks Comparison

At FM we have a problem. Each time someone starts working on a new project (a web application, to be more specific), s/he needs to create a new VM, set it up, and install a Ruby stack on it (this is the most used stack at the moment, yay! :D). It's a pretty straightforward and well-known process, but when you have to do it 2 to 4 times per month it gets pretty tedious, and it consumes valuable time. So... we are going to automate it! (Yeah, we should have done it a long time ago :)).

Common Considerations when choosing a tool

I think it's important to consider the following things when choosing an automated deployment tool.

  • How many ready-made 'recipes' are available. This usually goes hand in hand with how widely adopted the tool is.
  • How do you like the DSL offered by the tool?
  • How steep is the learning curve?
  • The scope of the tool: does it address your set of problems?
  • Documentation


Things I was looking for

  • The tool shouldn't need to be installed on the server; rather, it should operate remotely through SSH
  • It should be able to do the basic Unix server setup stuff: adding my private key, creating an account, etc.
  • It should set up a full Ruby stack with Apache/nginx, RVM, git, and MySQL with minimum hassle.

The ones listed above are what I'm looking for. However, just because a tool doesn't already have a recipe that handles installing RVM doesn't make it a bad tool (duh!).

Tools/Frameworks that I tried out

Chef and Puppet

They seemed a bit too much for what I needed. I feel I should be able to use a far simpler tool with a shorter learning curve. Both of them are great tools, though, and I would like to work with Chef in the future. But right now, setting up a whole server just to get this done sounds a bit intimidating (AFAIK Chef needs a Chef server to operate; Puppet doesn't necessarily need one).

Babushka

The author of Babushka says: "However, I don't think of babushka as a deployment tool. I think of it as a 'remember what I researched or learned the first time' tool, and part of that is deployment." Its home page describes it as "a tool for finding, running, writing and sharing recipes to automate things."

I actually like the philosophy of the tool. There are tons of things that I research and then forget in a very short time. It would be great to write them down and reuse them easily in the future. And since this is not limited to deployment, it covers a wide spectrum of tasks that you might want to automate, such as installing Homebrew! So once you 'script' something you can easily reuse it. And if something goes wrong, Babushka will try to identify the step that failed and point it out to you, so you can look at it and deal with it. Very useful!

However, one tiny drawback of Babushka is that you have to install it on whatever box you are going to set up with it. That's not the case with tools like Git Pusshuten and Capistrano, because they execute everything through SSH.

Sprinkle

At first glance, Sprinkle looked like exactly what I need! Its official description says: "Sprinkle is a software provisioning tool you can use to build remote servers with. eg. to install a Rails, or Sinatra stack on a brand new slice directly after its been created." Like most solutions discussed here, it has a DSL so you can write deployment scripts to set up different servers with different services and packages. It has a nice collection of 'installers' that let you install applications from various sources, from APT to running ad-hoc commands. Using these can make your life easier (as someone who writes scripts).

Most likely you would use Capistrano (though Sprinkle isn't coupled to it) with Sprinkle to deploy and set up remote servers from your workstation.

However, after checking out the available deployment scripts, I feel there is room for more 'out of the box' scripts.

Git Pusshuten

As the official web page describes it: "It is a Git-based application deployment tool that allows you to define your environment by utilizing modules and provision your server with basic deployment needs."

I have used gitpusshuten in the past, and it does what it's supposed to pretty well. Compared with Sprinkle and Babushka it's not that widely used (assuming GitHub stats, i.e. numbers of forks and watchers, are good indicators).

With gitpusshuten you issue a series of commands to set up a server and deploy an app to it. It feels like something in between Capistrano and Sprinkle to me. In the end I decided on gitpusshuten, mostly because it already had 'recipes' for most of the tasks that I needed.

Conclusion

I hope to play around with Babushka and Chef in the future. I feel Chef is a good tool to invest in (learn, use, contribute to) in the long term. Babushka interests me too, though I imagine myself using it to automate things in general; for deployment I would like to use something I can run through SSH.

26 July 2011

Posted by Anonymous

at 1:32 AM


Coping with memory leaks in Android 3.0's ViewFlipper and Bitmap classes

Recently, I was baffled by a persistent memory leak in one of our company's Android 3.0 projects running on the Motorola Xoom. I spent a lot of time trying to figure out the source of the memory leaks without much luck.

After some digging around, I found a very helpful tool for the Eclipse IDE: MotoDev Studio, a free plugin provided by Motorola for Eclipse. You can download it here: http://developer.motorola.com/docstools/motodevstudio/

At a cursory look, one might think that this is simply an IDE helper for Android/Xoom development. It is that, but it does a good deal more to make Android/Xoom development easier.

The plugin provides the Memory Analyzer Tool (MAT); run it while debugging your Android app and it generates a full report on memory usage.

For my project, the Memory Leaks report showed that memory leaks were coming from two Android classes: ViewFlipper and Bitmap.

I couldn't find any easy fix for ViewFlipper; the only solution I could think of was to completely remove the ViewFlipper and write my own custom handler to do the view switches. This was annoying, but fixed the memory leak.

For Bitmaps, there were several things I learned in order to repair the memory leaks:

1. ImageView: do not bind images using "android:src" in the layout XML; use ImageView.setImageResource() instead.
2. View switching: if you have many ImageView objects being switched in and out, remove them all and use only one inside your code; remove the ImageView object when you switch to another kind of View, then add it back when you need an ImageView again.
3. Use the setWillNotCacheDrawing() method on the ImageView.
4. Set bitmap references to null or recycle them when you are finished with them, so the GC can reclaim the memory (see the sketch below).
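
A hypothetical helper illustrating points 2 and 4 (the class and method names are mine, not from the project):

import android.graphics.Bitmap;
import android.widget.ImageView;

public final class BitmapHelper {
    private BitmapHelper() {}

    // Detach the image from the view and free the bitmap's native memory.
    public static void release(ImageView imageView, Bitmap bitmap) {
        if (imageView != null) {
            imageView.setImageDrawable(null); // drop the view's reference
        }
        if (bitmap != null && !bitmap.isRecycled()) {
            bitmap.recycle(); // free pixel memory without waiting for the GC
        }
    }
}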

There are still a lot of things you need to be careful of when using bitmaps, but in sum: remember to always release the memory used for the bitmaps when it's no longer needed.

After 3 days of work, I had successfully reduced the memory use of our app from 40 MB to 8.7 MB.

- Winkey

05 July 2011

Posted by Anonymous

at 5:46 AM


Software Estimation: Traditional and Agile approaches

Software estimation has historically been a difficult task for numerous reasons, yet it's also one of the most important steps in a software development project that typically garners nowhere near the attention or thought that it deserves.

Curious as to the academic consensus on software estimation, I surveyed a variety of software texts on the topic; from classics: Mythical Man Month (MMM), Code Complete (CC), and Rapid Development (RD), to more current texts: Practices of an Agile Developer (POAAD), The Pragmatic Programmer (PP), and Agile Software Requirements (ASR).

Surprisingly, and not so surprisingly, there is quite a bit of overlap between older and newer approaches to software estimation.

Why is software estimation so hard?

Code Complete (CC) begins its chapter on software estimation with, "Managing a software project is one of the formidable challenges of the late 20th century. . .the average large software project is 1 year late and 100% over budget." Consoling words, and they resonate with most developers who have done professional software development. But why is software estimation so hard?

  • The simplest answer is: software is extremely complex. Software has lots of moving parts, and these parts are often constantly changing or being updated. Not just small parts either (like a new library or module), but large parts as well - new languages, new frameworks, new operating systems, new hardware - all affect the complexity of the task - and add to the difficulty of coming up with an accurate estimate. It is not unusual for a developer to use a few, if not many, modules that she's never used before during a project.
  • Software quality varies. Writing a program vs. writing a "programming product" can vary by as much as 3X (MMM).
  • Developer skill varies. One study indicates that programmer productivity can vary as much as 10:1 (MMM). Some say as much as 20:1.
  • The client's understanding of his product varies. I haven't read a lot of authors writing about this, but, in my experience the client's grasp of his own product definition hugely affects the pace of development. The clearer the client's understanding of his product's details and design, not surprisingly, the faster the pace of development.
  • Developer optimism. "The first false assumption that underlies the scheduling of systems programming is that all will go well." (MMM)

Why do we want good estimates?
"Rapid Development" makes a salient point: (1) good estimates are more than a means to make the client happy. Good estimates also: (2) reduce development costs internally (e.g. reducing scheduling inefficiencies: overlap and overrun), and (3) they provide a sustainable pace that helps developers avoid burnout, making them more productive in the long-term.

When estimation fails. . .what can you do?
There are a few common fixes to a software schedule that's falling behind. It turns out that some of them aren't really 'fixes' at all:
  • MYTH: We'll fix it in the end. "One survey of over 300 software projects concluded that delays and overruns generally increase toward the end of the project. Projects don't make up lost time later; they fall further behind." (CC)
  • MYTH (largely): Add more people. While it may seem counter-intuitive, Brooks' Law (MMM) asserts that "Adding manpower to a late software project makes it later." In the words of Code Complete: "New people need time to familiarize themselves with a project before they can become productive. Their training takes up the time of the people who have already been trained. And merely increasing the number of people increases the complexity and amount of project communication." In reality, adding people during a large project may help; it is less likely to for smaller projects.
  • Reduce scope. This is probably the most sensible and easiest solution of all. It can take many forms: dropping a feature, delaying performance tuning, implementing a crude version of a feature, to be fleshed out in a future release.

I. Traditional approaches to improving estimates:
The following are tips and suggestions for improving the accuracy of estimates. Labeling these approaches "traditional" is not to imply that these approaches are outdated or somehow lacking; in fact, a lot of the following techniques have been absorbed by Agile estimation techniques.
  • Formalize requirements. This may sound like a 'duh!' statement - but it is often overlooked. (CC) Why? Oftentimes, a client wants to build something without knowing the outlying details of what she's having built; a client can have a solid sense of her product definition from a high-level, but be fuzzy on the details; she may expect that development will help flesh out the details. In this case, rapid prototyping, may be the best approach - so that the client and developer can both know, with greater detail, what they are building, as quickly as possible.
  • Make time for estimates: "Rushed estimates are inaccurate estimates. . ." (CC). Software estimation can be a mini-project unto itself and, if time permits, should be treated as such. While this may sound like a luxury and an 'unnecessary' upfront cost, its cost can pale in comparison to the potentially costly overruns that will likely arise due to poor estimation. "Until each feature is understood in detail, you can't estimate the cost of a program precisely. Software development is a process of making increasingly detailed decisions." (RD)
  • Allow developers to make the estimates. (RD) "We have to mention a basic estimating trick that always gives good answers: ask someone who's already done it." (PP)
  • Use several estimation techniques, and compare the results. (RD)
  • Re-estimate periodically: for long-term projects it might make sense to schedule re-estimation periods, factoring in newly determined velocity. (RD)
  • Track historical data: documenting estimates and actual completion times, while often seen as an annoyance, can provide sharper metrics for estimating a future set of tasks. A set of historical data is required to accurately define future estimates. (RD)

II. The Agile approach to estimation (see "Agile Software Requirements")
Yes, all these tips and suggestions may sound daunting, maybe even other-worldly. Given a tight schedule - who has time to do all this? To the rescue, in walks the Agile approach. . .

To the outsider, Agile methods can sound like a justification for no documentation, no planning, and no coordination; not true. It is true that Agile teams often abhor long-term estimation. . .not for fear of commitment, but because they understand that long-term estimates are most often inaccurate, so why bother? One of the main reasons that Agilists reject traditional project estimating (e.g. identify tasks, estimate tasks, sum tasks, then build a Gantt chart) is that it "never actually worked for software projects." (ASR)

Basic Concepts of Agile estimation:
  • User Stories: a brief statement that explains a feature, usually based upon a cause and effect statement. Example: User X does Action Y and Result Z happens.
  • Story points: numbers (integers) that indicate the 'bigness' of a single User Story - the higher the number the bigger the task. Using techniques below, each User Story, is assigned Story Points. There are four aspects to consider when assigning Story Points to a User Story:
  1. Knowledge and comprehension of the User Story.
  2. Complexity: how hard it will be to implement
  3. Volume: how much actual work there is to do
  4. Uncertainty: unknowns that might affect the estimate
Any numerical scale (1-5 or 1-100) can be used as a metric for Story Points; what matters is that the points are 'numerically relevant': a two-point story should take twice as long as a one-point story. Another approach to Story Points, perhaps more suited to larger projects, is to use IDDs (Ideal Developer Days, a theoretical perfect day of work) as Story Points.

A few approaches to Agile estimating:
  • Work first, estimate after. "Let the team actually work on the project, with the current client, to get realistic estimates." This may seem backwards, but it is actually a very practical way to obtain an estimate grounded in reality (not in some potentially flawed algorithm). (POAAD) Understandably, not all clients may be open to starting a project without any scheduling estimates.
  • Planning Poker. This is an online game in which developers bid on User Stories, until an entire backlog has been estimated. These estimates can then be used to define a developer's estimation accuracy as well as help developers' hone their estimation skills over time. http://www.planningpoker.com/ (ASR)
  • Tabletop relative estimation. This is a simple approach to estimation whereby a development team orders index cards (each with a single User Story on it) on a table by Story Point value. The team discusses each story, placing smaller stories to the left and larger stories to the right, in order of complexity. Each column has a different Story Point value. By the end of the estimation session (pretty quick), the team has a good relative sense of the scale of the User Stories. (ASR)

Advice: Keep estimating brief. . .
Good news: according to ASR, there are diminishing returns with repeated attempts at estimation. After having a team provide three estimates, further rounds do not gain any accuracy. More is not better here.

Velocity:
After estimations have been generated by assigning Story Points to User Stories, a sense of scope is acquired, but still no sense of how long development will take. How is a sense of development speed (aka Velocity) determined?

Answer: By determining how many Story Points a team or person can complete during a timeboxed iteration of User Stories.

Example: Have two teams work on the same set of User Stories (after having defined their Story Points) for a 2 hour timebox (iteration). Most likely, when the Story Points for each team are summed at the end of the iteration, there will be two different scores. These scores indicate the Velocity of each team.
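
To make the arithmetic concrete (illustrative numbers only): if Team A completes User Stories totalling 21 Story Points in the timebox and Team B completes 13, their Velocities are 21 and 13 points per iteration respectively. A 105-point backlog would then take Team A about 5 iterations (105 / 21) and Team B about 8 (105 / 13).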

III. Conclusion:
Having surveyed a few classic and current approaches to software estimation, it's clear that the problems surrounding software estimation have not changed: software estimation has been and will remain a complex task, most likely for as long as writing software remains a complex task.

There are many techniques to minimize the problems associated with poor software estimation: traditional methods focus on defining tasks, estimating tasks and summing the estimates; Agile methods avoid a waterfall technique and assert that the only way to gain an accurate estimate is to begin the work and estimate from there, and re-estimate as the project continues.

Agile estimation embraces the complexity of software and tackles the difficulties of estimation with a lightweight real-world process:
  1. write User Stories
  2. have developers estimate with Story Points
  3. closely track actual iterations of development tasks (User Stories) to determine developers' Velocity
  4. finally, arrive at estimates that are based on real-world development; they should become more accurate over the course of the project as work history progresses

01 June 2011

Posted by Anonymous

at 10:19 AM


Using RABL in Rails JSON Web API

Let's use an event management app as the example.

The app has a simple feature: a user can add some events, then invite other users to attend the event. Its data are represented in 3 models: User, Event, and Event Guest.



Let's say we are going to add a read-only JSON web API to allow clients to browse data records.

Problems

Model is not view

When working on a non-trivial web API, you will soon realize that models often cannot be serialized directly in a web API.

Within the same app, one API may need to render a summary view of the model, while another needs a detail view of the same model. You want to serialize a view or view object, not a model.

The RABL (Ruby API Builder Language) gem is designed for this purpose.

Define once, reuse everywhere

Let's say we need to render these user attributes: id, username, email, display_name, but not password.

In RABL, we can define the attribute whitelist in a RABL template.

# tryrabl/app/views/users/base.rabl
attributes :id, :username, :email, :display_name

To show an individual user, we can now reuse the template through RABL extends.
# tryrabl/app/views/users/show.rabl
extends "users/base"
object @user

## JSON output:
# {
#     "user": {
#         "id": 8,
#         "username": "blaise",
#         "email": "matteo@wilkinsonhuel.name",
#         "display_name": "Ms. Noe Lowe"
#     }
# }

Here's another example to show a list of users.
# tryrabl/app/views/users/index.rabl
extends "users/base"
collection @users

## JSON output:
# [{
#     "user": {
#         "id": 1,
#         "username": "alanna",
#         "email": "rubie@hayes.name",
#         "display_name": "Mrs. Gaylord Hoeger"
#     }
# }, {
#     "user": {
#         "id": 2,
#         "username": "jarrell.robel",
#         "email": "jarod@eichmann.com",
#         "display_name": "Oran Lebsack"
#     }
# }]

The template can be reused in nested children as well, through RABL child.
attributes :id, :title, :description, :start, :end, :location
child :creator => :creator do
  extends 'users/base'
end

## JSON output:
# {
#     "event": {
#         "id": 7,
#         "title": "Et earum sed fuga.",
#         "description": "Quis sed ..e ad.",
#         "start": "2011-05-31T08:31:45Z",
#         "end": "2011-06-01T08:31:45Z",
#         "location": "Saul Tunnel",
#         "creator": {
#             "id": 1,
#             "username": "alanna",
#             "email": "rubie@hayes.name",
#             "display_name": "Mrs. Gaylord Hoeger"
#         }
#     }
# }

Join table rendered as subclass

I notice a recurring pattern in two recent projects. For instance, in this example, from the client's point of view, Event Guest is basically a User with an additional attribute: RSVP status.

When querying the database, usually we need to query the join table: event_guests.
class GuestsController < ApplicationController
  def index
    @guests = EventGuest.where(:event_id => params[:event_id])
  end
end

But when rendering, the result set needs to be rendered as a list of Users. RABL allows you to do that easily, using its glue feature (a weird name though :).
# tryrabl/app/views/guests/index.rabl
collection @event_guests

# include the additional attribute
attributes :rsvp

# add child attributes to parent model
glue :user do
  extends "users/base"
end

## JSON output:
# [{
#     "event_guest": {
#         "rsvp": "PENDING",
#         "id": 3,
#         "username": "myrna_harvey",
#         "email": "shad.armstrong@littelpouros.name",
#         "display_name": "Savion Balistreri"
#     }
# }, {
#     "event_guest": {
#         "rsvp": "PENDING",
#         "id": 4,
#         "username": "adelle.nader",
#         "email": "brendon.howe@cormiergrady.info",
#         "display_name": "Edgardo Dickens"
#     }
# }]

The complete Rails example code is available at github.com/teohm/tryrabl.

25 May 2011

Posted by Anonymous

at 12:27 PM


Using JQuery Validation in Rails Remote Form

In a recent project, I was trying to use JQuery Validation with an earlier version of the Rails 3 remote form driver (jquery-ujs). They didn't work well together in IE.

After experimenting with the latest jquery-ujs (and making an embarrassing mistake), it turns out that the issue is resolved in the latest version.

(Mistake: You may notice I removed a previous post about this topic, where I mistakenly concluded the latest jquery-ujs is not working with JQuery Validation. Thanks to JangoSteve for pointing it out. The post was misleading, so I believe it's best to remove it to avoid confusion. :-)

Get the latest jquery-ujs

There are 2 reasons to use the latest jquery-ujs:

  1. it has a patch that fixes the issue (see issue #118).
  2. it exposes an internal function that we may need -- $.rails.handleRemote() (see more details)

Working example

The example is tested with:

Try running the example page on IE7/8: /quirks/jquery-validate-ujs-conflict/jquery-ujs-latest.html

When using submitHandler in JQuery Validation

If you are using JQuery Validation's submitHandler(form) function, you need to call $.rails.handleRemote( $(form) ) function manually, so that it submits the form via XHR.

$('#submit_form').validate({
  submitHandler: function(form) {
    // .. do something before submit ..
    $.rails.handleRemote( $(form) ); // submit via xhr

    // don't use, it submits the form directly
    //form.submit();
  }
});

19 April 2011

Posted by Anonymous

at 2:10 PM


Managing multiple Grails versions in development

In Grails development, it's not uncommon to maintain several projects with different Grails versions.

It's a PITA to switch between Grails versions during development because it requires updating the GRAILS_HOME environment variable to point to the correct Grails directory.

Rescue My Ass
I added 2 new bash commands to make my life easy:

  • grls - list all available installed versions.
  • gr - set GRAILS_HOME to the specified version.

How to Use It
Beech-Forkers-MacBook:~ huiming$ grls
1.1.2
1.3.2
1.3.4
1.3.6

Beech-Forkers-MacBook:~ huiming$ gr 1.3.2
Beech-Forkers-MacBook:~ huiming$ grails
Welcome to Grails 1.3.2 - http://grails.org/
Licensed under Apache Standard License 2.0
Grails home is set to: /Users/huiming/work/tools/grails


How I Implemented It
First, I keep all Grails installations under a same directory:

Beech-Forkers-MacBook:~ huiming$ ls -d1 work/tools/grails-*
work/tools/grails-1.1.2
work/tools/grails-1.3.2
work/tools/grails-1.3.4
work/tools/grails-1.3.6

Then, add the commands into my ~/.profile file:

TOOLS=~/work/tools

function gr {
  rm -f $TOOLS/grails && \
  ln -s $TOOLS/grails-$1 $TOOLS/grails
}

alias grls="ls -d $TOOLS/grails-* | xargs -n1 basename | sed 's/grails-//g'"

export GRAILS_HOME=$TOOLS/grails
PATH=$GRAILS_HOME/bin:$PATH

Thanks to Jeff, as the idea is largely based on his solution.

07 February 2011

Posted by Irregular Zero

at 1:12 PM


A Certain Minimal QR Scanner iPhone app

A QR Code is similar to a barcode, except it contains more information and looks like a pixellated square Rorschach test.

There are a number of free QR readers in the App Store, like NeoReader and RedLaser. I especially like the scanning GUI of RedLaser, which seems more streamlined than the other apps I have tried.

A number of these apps use the open-source ZXing ("Zebra Crossing") scanner library. The library is written in Java but has a number of ports, including one for the iPhone.

To start things off, you need to download or check out the source. Create a new "View-Based Application" project in Xcode. Inside the iphone folder of the source, follow the README on how to include the ZXingWidget project into yours.

Pay attention to the instructions, especially the direct dependency, the header search path, and the fact that any file that includes ZXing headers has to be a .mm instead of a .m. If it does not build, you're probably missing something. You can look at the sample projects to see how they include the widget.

One last thing before moving on to the code: put the beep-beep.aiff file from the ScanTest project into your project. This gives audio confirmation of a scan.

Inside the sole viewController of your project:

#import "ZXingWidgetController.h"
#import "QRCodeReader.h"
#import "ResultParser.h"
#import "URLResultParser.h"
#import "ResultAction.h"

- (void)viewDidLoad {
    [super viewDidLoad];
    [ResultParser registerResultParserClass:[URLResultParser class]];
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    ZXingWidgetController *widController =
        [[ZXingWidgetController alloc] initWithDelegate:self showCancel:NO OneDMode:NO];
    QRCodeReader *qrcodeReader = [[QRCodeReader alloc] init];
    NSSet *readers = [[NSSet alloc] initWithObjects:qrcodeReader,nil];
    [qrcodeReader release];
    widController.readers = readers;
    [readers release];
    NSBundle *mainBundle = [NSBundle mainBundle];
    widController.soundToPlay =
        [NSURL fileURLWithPath:[mainBundle pathForResource:@"beep-beep" ofType:@"aiff"] isDirectory:NO];
    [self presentModalViewController:widController animated:YES];
    [widController release];
}

#pragma mark -
#pragma mark ZXingDelegateMethods
- (void)zxingController:(ZXingWidgetController*)controller didScanResult:(NSString *)resultString {
    [self dismissModalViewControllerAnimated:YES];
    ParsedResult *parsedResult = [[ResultParser parsedResultForString:resultString] retain];
    NSArray *actions = [[parsedResult actions] retain];

    if ([actions count] == 1) {
        ResultAction *theAction = [actions objectAtIndex:0];
        [theAction performActionWithController:self shouldConfirm:YES];
    } else {
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Text Found:"
                                                            message:resultString
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
        [alertView show];
        [alertView release];
    }
}

The code in viewDidAppear is lifted from the sample projects. It sets up the scanning video camera with the appropriate reader. If a scan is successful, the didScanResult delegate method executes. The result is parsed to see if it is a URL; you register which parser to use in viewDidLoad. A parsed result can have default actions associated with it; the URLResultParser's default action is to open the URL in Safari. Otherwise the result is treated as text and displayed.

This app can now scan QR codes and open URLs in Safari. There are a number of other things you can add to this; e.g. you can switch out the ResultParser for a UniversalResultParser that includes all the parser classes. Take a look in the Classes folder of the ZXingWidget project to see what is available.

04 February 2011

Posted by Yasith Fernando

at 11:15 AM


Honeycomb

Google is getting ready to launch Android 3.0, aka Honeycomb, the latest version of the Android OS, aimed at tablets. It has a lot of new goodies that will radically change the way we design and develop applications, especially user interfaces.

After going through some articles and documents sourced from the Internet, I am compiling this blog post as an overview of the main UI changes introduced in the 3.0 API, and of how they might help application developers like you and me write applications that could potentially run on both smartphones and tablets (nobody knows for sure whether Honeycomb will run on smartphones, but it seems to support smaller screen sizes as well; I am being optimistic here!). Even if this is possible, it might not be a reality in the near future, as Google has yet to announce any smartphones running the new branch of the OS.

Ouch! It seems that is not going to be true anytime soon: Google Says Honeycomb Will Not Come To Smartphones.

Honeycomb is the first release of the 3.x branch. Right now, Google seems to be reserving the 3.x branch for tablets and the 2.x branch for smartphones. As for the 2.x branch, Android 2.4 (Ice Cream) will be the next release. Given that maintaining two platform-specific versions of the OS doesn't seem like a long-term strategy, convergence will probably happen at some point in the future.

Fragments


This seems to be one of the most exciting new features of Honeycomb. The API documentation defines a Fragment as “...a piece of an application's user interface or behavior that can be placed in an Activity”. Fragments are basically small units or modules that can together make up an Activity. Perhaps the coolest thing is that you can change them dynamically at runtime!

Modularity is almost always good! (caveat: only if used with some common sense). You can create different layouts by mixing up Fragments. An Activity can be broken down into different regions and then designed separately. It also allows developers to create interfaces that let users choose among different layouts at runtime. Because Fragments can be added in or removed dynamically, this should be fairly easy to do.

Google has cited another use for Fragments: changing the layout of an application depending on the orientation of the screen. So you can create UIs that will make maximum use of screen space depending on the screen orientation. For example, if a given device has a lower resolution than a tablet, a developer can decide to use a Fragment (or a set of Fragments) that are designed to better fit a smaller screen size and resolution. Or, if the device that you are using has a larger, higher resolution screen, you can conditionally use a different set of Fragments or simply a different mix of Fragments to optimize the interface. I have not really tried using Fragments in a real world application, but I think they could be very helpful in making applications more adaptable to different device form factors and somewhat independent of hardware platforms. However, we should not expect this to be a perfect solution to make apps device and resolution independent, as it will involve many other considerations such as sizing and resolution of embedded image assets and other content types.

Back Stack

Fragments that are displayed one after another can be added to a stack by modifying the Fragment inside a transaction; a minimal example is below. This helps users go back to whatever Fragment they came from using the standard back button. Such UI statefulness can greatly improve the user experience.
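
A minimal sketch of the pattern (the layout, container id, and DetailsFragment are placeholders, not from any real project):

import android.app.Activity;
import android.app.FragmentTransaction;
import android.os.Bundle;

public class ExampleActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // a layout with a container FrameLayout

        // Swap a Fragment into the container and record the change on the
        // back stack, so the Back button returns to the previous Fragment.
        FragmentTransaction tx = getFragmentManager().beginTransaction();
        tx.replace(R.id.container, new DetailsFragment());
        tx.addToBackStack(null);
        tx.commit();
    }
}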

ActionBar

One can argue that the space taken by the title bar is a waste on small handheld devices like tablets or smartphones. The action bar replaces the title bar and displays the title for a window while harbouring various other widgets, menus, and tool buttons.

Browsing through the API documentation for ActionBar, you can see some interesting classes and constants, like ActionBar.Tab and NAVIGATION_MODE_LIST, that suggest the ActionBar can be used for navigation and can contain embedded menus, buttons, and tabs. Combined with methods such as setCustomView(), which sets a custom view to appear inside the ActionBar, this could be used to implement things like a search function for an eBook reader app; a sketch follows.
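
A sketch of that idea (R.layout.search_bar is a placeholder layout the app would supply):

import android.app.ActionBar;
import android.app.Activity;
import android.os.Bundle;

public class SearchableActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Show a custom view (e.g. a search box) inside the ActionBar.
        ActionBar bar = getActionBar();
        bar.setDisplayShowCustomEnabled(true);
        bar.setCustomView(R.layout.search_bar);

        // List navigation can be enabled in a similar way:
        // bar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
    }
}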

A breakdown of some other highlights:
  • Multi-select, clipboard, and drag-and-drop
  • Richer notifications
  • Live HTTP streaming
  • New APIs for Bluetooth A2DP and HSP
  • Hardware accelerated rendering
    • New animation framework (new android.animation package)
    • Hardware-accelerated OpenGL renderer for 2D graphics
    • A new 3D rendering engine called Renderscript (new android.renderscript package)
  • Ability to leverage multi-core processors
  • DRM framework (new android.drm package)

Playing with the SDK
There is no substitute for playing with the SDK yourself. You can get it here. If you already have a previous version of an Android SDK installed, you can install Honeycomb via the Android SDK manager. Look for Android 'Honeycomb' Preview SDK along with the documentation bits. Have fun!

Sources
  • Android 3 Highlights - Google
  • First look: Honeycomb APIs power tablet-friendly Android apps - Ars Technica
  • Google posts Android 3.0 Honeycomb SDK preview - intomobile