Code Sushi

tasty bits of code rolled and sliced by Mihai

Managing iOS and Android Localized Strings

If you are developing an app for iOS and Android, and you need to translate it into a foreign language, you will face the decision of translating the user-facing strings in the two apps independently or together. Translating the apps independently is the easier option as far as development effort is concerned. On the other hand, you can take advantage of the fact that roughly the same strings will be used in the iOS and Android versions and save costs on the translation side. Translating the apps together comes with some difficulties, as we’ll see in this article.

Before we dive deeper, I just want to note that at this point you should already be familiar with string internationalization on iOS and Android. If not, I recommend you start by reading the iOS Internationalization and Localization Guide and Supporting Different Languages for Android. There are of course other resources besides strings that need to be considered when localizing an application, but this article will focus only on strings.

Making tradeoffs

On the iOS side, Xcode 6 came with several improvements to its internationalization process. genstrings and ibtool no longer have to be used to extract strings from code and interface builder files. Instead, this is now achieved in a single step by exporting to XLIFF. genstrings and ibtool were particularly annoying to use as they were not capable of merging the generated strings files with previous outputs. The new export/import feature solves this problem by extracting the localizable strings to an external XLIFF file that is not directly used by the app and importing and merging localized strings from XLIFF files into strings files.

Other problems, however, remain unsolved: base internationalization offers no way of adding comments, and all strings, except those used by base internationalization, are indexed by the string itself, in the project’s development language. The last point is occasionally problematic if you have a string that’s a homonym in the development language. Take for example the English word share, which can be used as the verb to share or as a noun, as in a stock share. Another language will likely need two different words for the two meanings, so in this case it’s not possible to use a single string key.

But for cross-platform projects, the biggest issue is that iOS localizable strings files generated as part of base internationalization, or even the XLIFF exported output, are not readily usable on Android (and vice versa). Base internationalization string files contain key-value pairs where strings are indexed by Interface Builder object ids:

"CJ9-Yv-rto.text" = "Some text";

These kinds of ids are practically meaningless in an Android project and you definitely don’t want to write code that uses them as resource names. For this reason, base internationalization can’t be used if we hope to use a single set of localizable strings on iOS and Android.

A typical internationalization-localization workflow

Let’s briefly look over the typical activities performed when internationalizing and localizing an app developed for iOS and Android.

During development, localizable strings are used instead of hardcoded strings. This means that application code loads user-facing strings, based on pre-determined keys, from localized string files. Initially no translated strings are available, so the only locale used is that of the project’s development language.

At some point, all localizable strings, from both platforms, need to be gathered and sent to translators. Commonly the two platforms will use roughly the same set of strings. To send out a single file containing all strings we need to come up with a common format and merge the iOS and Android strings into it. This can be either one of the platform native formats or an entirely different one. Either way we’ll need to take into account that string formatting characters like %@ and %s will be different for the two platforms. While translators are busy doing their work, application development continues and new strings are added while others may be deleted or modified.
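To get a feel for what that conversion involves, here is a rough Python sketch of mapping iOS-style format specifiers to their Android/Java-style counterparts. This is only an illustration covering the object specifier %@ (and its positional form %1$@); real tooling handles many more cases.

```python
import re

def ios_to_android(fmt):
    """Translate iOS object format specifiers to Android/Java ones.

    %@ has no direct Java equivalent, so it's commonly mapped to %s;
    positional forms like %1$@ become %1$s. A simplified sketch only.
    """
    return re.sub(r'%(\d+\$)?@', lambda m: '%' + (m.group(1) or '') + 's', fmt)

print(ios_to_android("Hello, %@!"))      # Hello, %s!
print(ios_to_android("%1$@ sent %2$@"))  # %1$s sent %2$s
```

Going the other direction (Android to iOS) is the mirror image of the same substitution.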

When the translated strings are received back, they need to be merged with the new, untranslated strings. They also need to be converted back into platform-specific formats (strings files for iOS, XML for Android). Sometime in the future, the remaining untranslated strings need to be sent again to the translators.

These steps are often repeated multiple times. Crowdsourced translation services make translating small and frequent batches of strings easy. Doing this will help iron out any translation-related issues well before the release date. Naturally, when the application is ready for release all strings should be translated.

For medium to large projects there will be a significant number of strings to work with. Some of the steps, like the merging into and out of the common strings format, will be quite labor intensive. Automating the process is definitely a necessity. Fortunately there is an open-source command line tool called twine that can help us achieve this.

Preparing your localizable resources for use with twine

Like XLIFF export/import in Xcode, Twine works on a single master file that holds all your localizable strings with translations in the available locales. To import the existing strings from the iOS and Android projects start by creating an empty text file:

touch strings.txt

Next, import localizable strings from the iOS project:

twine consume-all-string-files strings.txt PathToIOSProject --developer-language en --consume-all --consume-comments

Now do the same for the localizable strings in the Android project:

twine consume-all-string-files strings.txt PathToAndroidProject --developer-language en --consume-all --consume-comments

This will merge the strings from the two projects, by string keys, in a common master file. If you were already using the same string keys on both iOS and Android you’ll end up with no duplicates. The command also identifies all the available locales based on project folder structure and will import any existing translations as well.
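Conceptually, the merge is just a union keyed by string id. A minimal Python sketch (the key names here are made up for illustration):

```python
# Strings consumed from each platform, keyed by string id.
ios_strings = {"welcome_title": "Welcome", "share_verb": "Share"}
android_strings = {"welcome_title": "Welcome", "settings_title": "Settings"}

# Union keyed by string id: keys used on both platforms collapse into a
# single entry, so shared strings only need to be translated once.
master = {**ios_strings, **android_strings}
print(sorted(master))  # ['settings_title', 'share_verb', 'welcome_title']
```

This is why reusing the same keys on both platforms pays off: every shared key means one fewer string to translate.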

Notice that when native strings are consumed, string formatting special characters, like %s, will be changed to a common format. Also notice that consume-all-string-files will search through all iOS strings files and all Android XML resource files. While this works great for the initial creation of the master file, in practice you’ll save yourself a lot of effort if you work with a single native strings file, per platform, per locale.

The workflow with twine

Let’s look again at the typical internationalization and localization actions, this time with twine helping us along the way.

Adding, modifying and deleting strings

When you need to add a new localizable string during development, simply add it to the master file. In other words, never manually edit the platform-specific strings files. These files will instead be generated automatically, as we’ll see next. Modifying and deleting strings is a bit problematic, since such changes are only safe when both platforms are affected in the same way. You can’t, for instance, delete a string just because the iOS version of the app no longer needs it, since it may still be in use on Android.

Updating iOS and Android localization files from the master file

Updating the localizable strings files is as simple as this:

twine generate-all-string-files strings.txt PathToProject

Again, twine will use the project folder structure to determine the locales the project is using and will update the necessary files for all of them. It is recommended to include this command in your build process so that whenever the master file is modified the localization files can be updated with a simple build command. In Xcode you can do this by adding a custom Run Script build phase. Just make sure you order this build phase to be executed before the Copy Bundle Resources phase.

Sending out strings to the translation team

When you are ready to send a batch of strings out for translation, you can send the single master file. The twine master file uses a human-friendly, easy-to-read syntax inspired by Windows INI files. Translators should have no problem filling in the missing translations. Optionally, you could export to Gettext PO files if you need to provide your translation team with a more commonly used file format.
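As an illustration of how easy the master file is to work with, here is a toy Python scan that finds keys missing translations. The embedded sample is a simplified approximation of twine’s syntax from memory (sections in [[...]], string keys in [...], one "lang = value" line per translation); check twine’s own documentation for the exact format.

```python
# Simplified twine-style master file used only for this illustration.
master = """\
[[General]]
    [welcome_title]
        en = Welcome
        es = Bienvenido
    [share_verb]
        en = Share
"""

langs = {"en", "es"}  # locales the app supports
seen = {}             # key -> set of locales that have a value
current_key = None
for line in master.splitlines():
    stripped = line.strip()
    if stripped.startswith("[["):   # section header
        continue
    if stripped.startswith("["):    # string key
        current_key = stripped.strip("[]")
        seen[current_key] = set()
    elif "=" in stripped and current_key:
        seen[current_key].add(stripped.split("=", 1)[0].strip())

untranslated = [key for key, locales in seen.items() if langs - locales]
print(untranslated)  # ['share_verb']
```

A scan like this can tell you at a glance how much work remains before a release.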

Merging back updates from the translation team

Merging back translated strings works similarly to importing strings from platform-specific strings files. Here’s an example that merges a file received from translators (localized.po) with the master file (strings.txt).

twine consume-string-file strings.txt localized.po --lang es

Finally, all that remains is to update the iOS and Android localizable strings files as we’ve seen before.

Conclusions

In the end, we have to wonder if it’s really worth translating the iOS and Android versions together. Twine helps automate many of the labor-intensive parts of the process, but coordination between developers building the iOS and Android apps is still needed, especially when modifying and deleting existing strings. Translation effort is certainly reduced, but this does not come for free. There is no clear-cut answer and everything depends on the particularities of your situation. The factors that will swing the balance are the number of languages that have to be supported, team composition (will the same developers work on both platforms or not?), the number of localizable strings, feature overlap and the relative cost of development versus translation.

Using UIViews as OpenGL Textures for Custom View Controller Transitions

In a previous post I’ve demonstrated how UIKit can be used with a physics engine. For this post I’m going to continue on the topic of mixing UIKit with technologies from the field of game development and show how UIViews can be used as textures in OpenGL.

As a practical example I’ve put together a sample app that simulates a UINavigationController where view controller transitions are animated using a cube instead of the standard slide animations. Go ahead and download the complete source code. You can see it in action in the video below:

Before we go any further I feel a disclaimer about the choice to use OpenGL in this example is needed. I have to admit it is entirely possible to construct simple 3D animations like this one using nothing but Core Animation – which, being a higher level technology, is easier to use. In fact CATransition already implements an undocumented “cube” animation identical to what I’ve built here from scratch. The only purpose of this example is to present the steps and challenges you need to go through in order to transform UIViews into OpenGL textures.

Drawing the view to an image

The method that makes this entire exercise possible is CALayer’s renderInContext:. As the name suggests, it draws a CALayer into a CGContext. By drawing into a CGBitmapContext (a subtype of CGContext), we can grab the image data and pass it to OpenGL using the glTexImage2D function. One point to remember here is that neither Core Graphics nor OpenGL knows anything about points – they only work in pixels. So whenever we transfer UIKit sizes and positions into Core Graphics or OpenGL we always need to take the content scale factor into account.

You’ll find the code implementing all this in the TextureAtlas class. Notice the code there actually draws two views in a single texture. The input views will be the views of the view controllers involved in the transition. The reason why we render the views side by side in the same texture is that shaders (in our case thinly wrapped by GLKBaseEffect) normally draw a single texture in a draw call. In our case this trick simplifies the code a bit, but when rendering more complex scenes it also helps to improve performance if you manage to group up your drawing needs into fewer draw calls. This is because sending data to the graphics pipeline is usually the performance bottleneck and not the rendering itself which runs on optimized hardware.

The second thing I want to point out here is the commented code I left in TextureAtlas. If you uncomment it you will see the texture saved as an image file in the application’s Documents folder. If you open up the file you will see it’s flipped on the y-axis. This happens because by default CGContext’s coordinate system defines the origin point to be in the lower-left corner. That may come as a surprise if you’ve used CGContext in other places, like in UIView’s drawRect:, where the y-axis is actually flipped for your convenience to match the coordinate system of UIKit. But since OpenGL uses the same coordinate system as Core Graphics, no extra handling is needed in our case.
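If you ever do need the image right side up, the flip itself is trivial: reverse the row order of the raw pixel buffer. Here is a small Python sketch of the idea (not code from the sample project):

```python
def flip_vertically(pixels, width, height, bytes_per_pixel=4):
    """Reverse the row order of a raw, tightly packed pixel buffer.

    This mirrors converting between a bottom-left origin (Core Graphics)
    and a top-left origin. For OpenGL texturing no flip is needed, as
    noted above.
    """
    assert len(pixels) == width * height * bytes_per_pixel
    row_size = width * bytes_per_pixel
    rows = [pixels[i:i + row_size] for i in range(0, len(pixels), row_size)]
    return b"".join(reversed(rows))

top_down = bytes([255, 0, 0, 255,   # red RGBA pixel (row 0)
                  0, 0, 255, 255])  # blue RGBA pixel (row 1)
flipped = flip_vertically(top_down, width=1, height=2)
print(flipped == bytes([0, 0, 255, 255, 255, 0, 0, 255]))  # True
```

In a real app you would normally avoid this copy and instead adjust texture coordinates, or apply a transform to the context before rendering.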

The right time to rasterize the views

Let’s take a look at the structure of the sample app. The root view controller is NavigationController, a custom view controller container that mimics the native UINavigationController. We have AnimationViewController, a wrapper for the transition animation that takes two views as parameters to kick off the animation. Also there are two dummy view controllers: FirstViewController and SecondViewController.

NavigationController is initialized with an instance of FirstViewController, which immediately gets added as a child view controller to the navigation controller. Now let’s say we need to present an instance of SecondViewController, using the pushViewController: method. What the navigation controller should do here is add the SecondViewController as a child so we can grab its view for the animation, add an AnimationViewController as a child, remove the FirstViewController, wait for the animation to complete and finally remove the AnimationViewController.

This seems straightforward but there is one thing we need to take care of. Once the animation starts, the views from the animated controllers are rasterized to textures and no future updates to them will be visible until the animation is done. View layouts (either triggered by autolayout or autoresize masks) are performed on the main thread but asynchronously from view initialization. If we simply grab the view from the SecondViewController as soon as it’s instantiated, we might end up with a view that’s the wrong size. Instead we can use UIViewController’s transitionFromViewController:toViewController:duration:options:animations:completion: method to get us out of this problem. We can leverage the fact that view layout is done by the time this method’s animation block parameter gets called.

Putting it all together

Once we have the textures ready, what’s left is the fairly standard task of creating an OpenGL scene for the animation. Namely we’ll need to create a polygon mesh for a cube (complete with texture coordinates and surface normals for lighting), rotate the cube and call a delegate when the cube has completed 90 degrees of rotation. You can have a look at how this is implemented in the Cube and AnimationViewController classes. GLKit really helps simplify the effort here. It can take care of iOS specific things like setting up a UIView for OpenGL rendering, setting up an animation loop tied to the refresh rate of the device’s screen or pausing the animation loop when the app goes into background. You can also use it for out of the box shaders and for linear algebra tasks common in 3D applications.
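To make the mesh part more concrete, here is a rough Python sketch of one way to generate per-face vertex data for a cube. The sample’s Cube class does the equivalent in Objective-C with its own data layout; this is just the general idea, and winding order and texture coordinates are left out.

```python
# Outward normal of each of the cube's six faces.
FACE_NORMALS = [
    (1, 0, 0), (-1, 0, 0),
    (0, 1, 0), (0, -1, 0),
    (0, 0, 1), (0, 0, -1),
]

def cube_vertices(half=0.5):
    """Generate (position, normal) pairs for a unit cube at the origin.

    Each face gets its own 4 vertices so every vertex can carry the
    face's outward normal; sharing vertices between faces would blend
    normals and break flat shading under lighting.
    """
    vertices = []
    for nx, ny, nz in FACE_NORMALS:
        # Two axes spanning the face, perpendicular to the normal
        # (cyclic permutations of the normal's components).
        u = (ny, nz, nx)
        v = (nz, nx, ny)
        for su, sv in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
            pos = tuple(half * (n + su * a + sv * b)
                        for n, a, b in zip((nx, ny, nz), u, v))
            vertices.append((pos, (nx, ny, nz)))
    return vertices

verts = cube_vertices()
print(len(verts))  # 6 faces x 4 vertices = 24
```

The same 24 vertices, plus texture coordinates into the side-by-side atlas described earlier, are all the geometry the transition needs.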

StackOverflow Tag Popularity by Country

StackExchange provides a great little tool called Data Explorer that lets any user run read-only SQL queries on their Q&A sites. I decided to put it to the test and use it to find how many questions got posted from each country for a set of tags. The location information of each question author is declared voluntarily, in freeform format. This means the data collected is pretty noisy, if available at all. To map the raw locations to canonical country names I used MapQuest’s open geocoder – one of the few such services not placing restrictions on the number of daily requests. Below is the resulting map showing the breakdown of questions by country and tag:

Using Physics in a UIKit Based Application

Some of the popular game engines available on iOS like cocos2d and Unity come bundled with physics engines so oftentimes the first thought when you want to add a bit of physics to your app is that you need to use one of the fancy game engines you’ve been hearing so much about. In this post I’ll walk you through using the Box2D physics engine without using any game engine or OpenGL.

Let’s start with a simple example. Create a single view iPad application project:

(Screenshots: choosing the project template and project options.)

Next we need to setup Box2D. The easiest way to do this is via CocoaPods. CocoaPods greatly simplifies library dependency management for iOS and OS X projects. If you haven’t used it before you need to run the following commands in the terminal to set it up:

$ [sudo] gem install cocoapods
$ pod setup

Library dependencies are declared in a text file named Podfile which needs to be placed in the same directory as your xcodeproj file. Create the file and paste in the following:

platform :ios
pod 'box2d', '~> 2.3'

This declares that you are targeting iOS and want to use the Box2D library at version 2.3 or any later 2.x version, but below 3.0 (this is what the pessimistic operator ~> means when given two version components). If you’re new to CocoaPods I encourage you to read about the other dependency declaration options available.

Final step, point your terminal to your Xcode project’s folder and run

pod install

This command will download the Box2D library and in addition create an Xcode workspace that contains your initial project file and a new Pods project where all your dependencies reside. From this point on you should always use the generated workspace instead of the initial project and you’ll be good to go.

Let’s move on to the good stuff and lay out some core Box2D concepts we’ll be using in this tutorial. Box2D simulates interaction between rigid bodies. Bodies have a position and a set of fixtures. Each fixture links a body with a shape and a few physical properties like mass, friction and “bounciness”. All bodies are of course part of a world and you could even have multiple worlds if you’d so desire. There are a host of other features available in Box2D that I won’t be covering in this tutorial. Have a look over the excellent Box2D manual to see what else is available.

Now the great part about Box2D is that it’s completely display agnostic. Its only job is to keep track of your bodies and ensure that everything interacts realistically. This also means it knows nothing of pixels, points or anything like that. Box2D simulates the “real world” and its units of measurement are meters, kilograms and seconds. It’s up to you to convert UIKit’s point-based coordinates into something Box2D can handle. There’s a catch though: simulating physics on objects of arbitrary size is computationally intensive. To keep things simple, Box2D is optimised to handle moving objects of sizes between 0.1 and 10 meters. So treating points and meters as equal will not work out very well. Instead we’ll use an arbitrary scaling factor to keep bodies within a reasonable size.

Let’s start by declaring a few functions for converting points to meters and back:

static const CGFloat kPointsToMeterRatio = 32.0;

float32 PointsToMeters(CGFloat points)
{
    return points / kPointsToMeterRatio;
}

CGFloat MetersToPoints(float32 meters)
{
    return meters * kPointsToMeterRatio;
}

b2Vec2 CGPointTob2Vec2(CGPoint point)
{
    float32 x = PointsToMeters(point.x);
    float32 y = PointsToMeters(point.y);
    return b2Vec2(x, y);
}

CGPoint b2Vec2ToCGPoint(b2Vec2 vector)
{
    CGFloat x = MetersToPoints(vector.x);
    CGFloat y = MetersToPoints(vector.y);
    return CGPointMake(x, y);
}

I also included here two convenience functions for converting between CGPoints and Box2D vectors and back.

Next, open up ViewController.m and replace its contents with the following:

#import "ViewController.h"
#import "World.h"

#import <QuartzCore/QuartzCore.h>

@interface ViewController ()
{
    World *_world;
}

@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    _world = [[World alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];

    [self generateCircles];
}

- (void)generateCircles
{
    CGSize viewSize = self.view.frame.size;
    CGFloat diameter = 60;
    srand(time(NULL));
    for(int i = 0; i < 20; i++)
    {
        UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(rand() % (int)(viewSize.width - diameter * 2) + diameter,
                                                                      rand() % (int)(viewSize.height - diameter * 2) + diameter,
                                                                      diameter, diameter)];
        circleView.backgroundColor = [UIColor redColor];
        // Half the view's width rounds the 60x60 square into a circle
        circleView.layer.cornerRadius = diameter / 2;
        [self.view addSubview:circleView];
        [_world addCircleWithView:circleView];
    }
}

@end

The view controller starts by initializing a world object that will handle the interaction with Box2D. The world object is initialized with a frame covering the entire screen. We then create 20 views at random positions on the screen and give them a corner radius to make them look like circles. Finally we pass them to the world object.

Now create a new class called World, do a #import <box2d/Box2D.h> at the top of World.m and add the following ivars to World in the implementation file (which at this point needs to be renamed to World.mm, since Box2D is a C++ library):

{
    b2World *_world;
    NSMutableArray *_circles;
}

Don’t worry right now about what these objects represent – I’ll talk about them again in the next bits of code. If you’ve never mixed Objective-C and C++ code before, remember that it’s always worth isolating C++ code and imports of C++/Objective-C++ files in implementation files. This saves you the trouble of renaming .m files to .mm when you import your mixed-language code and keeps your compile times as low as possible, since Objective-C code compiles faster than Objective-C++ code.

Moving on, declare an initialization method initWithFrame:

- (id)initWithFrame:(CGRect)frame
{
    self = [super init];
    if(self != nil)
    {
        b2Vec2 gravity(0.0f, 10.0f);
        _world = new b2World(gravity);
        _circles = [NSMutableArray array];

        [self createScreenBoundsForFrame:frame];
        [self setupAnimationLoop];
    }

    return self;
}

This code sets up a Box2D world object with gravity pointing along the positive Y axis. Since we map UIKit coordinates directly, where Y grows downward, objects will appear to fall from the top to the bottom of the screen. _circles is an array we’ll use to keep track of all the bodies we’ll be adding to our world. Next we’ll create the invisible borders of our screen in createScreenBoundsForFrame: to keep the views from falling off the screen, and finally set up an animation loop to drive everything in setupAnimationLoop. Here’s the code for createScreenBoundsForFrame:

- (void)createScreenBoundsForFrame:(CGRect)frame
{
    b2BodyDef screenBoundsDef;
    screenBoundsDef.position.Set(0.0f, 0.0f);
    b2Body *screenBounds = _world->CreateBody(&screenBoundsDef);

    b2Vec2 worldEdges[5];

    worldEdges[0] = b2Vec2(CGPointTob2Vec2(frame.origin));
    worldEdges[1] = b2Vec2(CGPointTob2Vec2(CGPointMake(CGRectGetMinX(frame), CGRectGetMaxY(frame))));
    worldEdges[2] = b2Vec2(CGPointTob2Vec2(CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame))));
    worldEdges[3] = b2Vec2(CGPointTob2Vec2(CGPointMake(CGRectGetMaxX(frame), CGRectGetMinY(frame))));
    worldEdges[4] = b2Vec2(CGPointTob2Vec2(frame.origin));

    b2ChainShape worldShape;
    worldShape.CreateChain(worldEdges, 5);
    screenBounds->CreateFixture(&worldShape, 0.0f);
}

Here you can see a screenBounds body object being created. Bodies are always instantiated through the Box2D world object, which will also keep track of and memory manage them. Also notice how the body is created from a body definition object. This allows multiple bodies to be instantiated using the same defining properties. Next we create a chain shape that spans the left, bottom, right and upper edges of the input frame. This is a special type of shape that doesn’t have a width and is mostly used for defining boundaries. Finally, the shape is bound to the body using a fixture.

Before looking at the animation loop let’s introduce the function that will map UIViews to Box2D bodies:

- (void)addCircleWithView:(UIView*)view
{
    NSAssert(view.superview != nil, @"The view parameter is not part of a view hierarchy");

    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position = CGPointTob2Vec2(view.center);
    b2Body *circle = _world->CreateBody(&bodyDef);

    b2CircleShape shape;
    shape.m_radius = PointsToMeters(view.frame.size.width / 2);

    b2FixtureDef fixtureDef;
    fixtureDef.shape = &shape;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.3f;
    fixtureDef.restitution = 0.8f;

    circle->CreateFixture(&fixtureDef);
    // Associate the body with the passed in view
    circle->SetUserData((__bridge void*)view);

    [_circles addObject:[NSValue valueWithPointer:circle]];
}

Here we create a body with the same position as the view, give it a circle shape that matches the size of the view, and set a few physical properties. Later on we’ll need to know which view each body represents, so we save a pointer to the view in the body’s user data field.

Let’s move on to setting up the animation loop:

- (void)setupAnimationLoop
{
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(animationLoop:)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)animationLoop:(CADisplayLink*)sender
{
    CFTimeInterval timeDelta = sender.duration * sender.frameInterval;

    int32 velocityIterations = 6;
    int32 positionIterations = 2;

    _world->Step(timeDelta, velocityIterations, positionIterations);

    // Adjust position for all views associated with circle bodies
    for(NSValue *circleWrapper in _circles)
    {
        b2Body *circle = (b2Body*)[circleWrapper pointerValue];
        UIView *view = (__bridge UIView*)circle->GetUserData();
        view.center = b2Vec2ToCGPoint(circle->GetPosition());
    }
}

Animations are always computed in discrete time intervals called frames. Animations get more accurate as frames get shorter, but since the amount of time required to compute a frame stays relatively constant, you take a performance hit when animating in shorter frames. Displays, however, operate at a fixed frame rate. In the code above, animationLoop: is in charge of animating and drawing a frame on the screen. CADisplayLink is in charge of deciding when the screen is ready to draw something new and will call animationLoop: to do the drawing. It’s entirely possible that animationLoop: would require more time to execute than a single screen refresh cycle, but what CADisplayLink ensures is that your animations stay as smooth as possible with the available resources and that no animation frames are computed if the display is not ready to show them.

Let’s look at animationLoop: in more detail. The first thing it does is figure out how much time has passed since the last run and pass that information to our world object, which then proceeds to update its internal state. There are two extra parameters there: velocityIterations and positionIterations. These have to do with Box2D’s inner workings. Box2D approximates body velocity and position over multiple iterations: the more iterations you attempt, the more realistic the outcome. You can fine-tune these values until you find an acceptable balance between performance and realism.
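To build some intuition for why extra iterations help, here is a toy Python illustration of iterative relaxation. This is emphatically not Box2D’s actual solver, just the general idea: each pass removes a fraction of the remaining constraint error, so more passes leave less residual error.

```python
def solve(distance, rest_length=1.0, iterations=1, relaxation=0.5):
    """Iteratively relax a distance toward a rest length.

    Each pass removes `relaxation` of the remaining error, so the
    residual shrinks geometrically with the iteration count.
    """
    for _ in range(iterations):
        error = distance - rest_length
        distance -= relaxation * error
    return distance

for n in (1, 2, 6):
    print(n, solve(3.0, iterations=n))  # residual halves each pass
```

More iterations converge closer to the exact answer (1.0 here) at the cost of more work per frame, which is exactly the trade-off the two Box2D parameters expose.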

Once Box2D does its thing, all we need to do is iterate over all circle bodies and update the position of their associated UIView to match the bodies’ new positions.

Finally, we need to take care of the cleanup. The only C++ object we allocated ourselves is the world object. Everything else was allocated internally by Box2D and will be cleaned up once the world object is destroyed:

- (void)dealloc
{
    delete _world;
}

That’s it! If you followed all the steps you should end up with something like this:

You can grab the full code for this tutorial from here: https://github.com/MihaiDamian/Box2DTutorial

So there you have it. Physics engines can be used very easily in a UIKit based app. In fact you could even go and build a game like Hundreds using nothing but UIKit and Box2D.