Code Sushi

tasty bits of code rolled and sliced by Mihai

Using UIViews as OpenGL Textures for Custom View Controller Transitions

In a previous post I demonstrated how UIKit can be used with a physics engine. In this post I’m going to continue on the topic of mixing UIKit with technologies from the field of game development and show how UIViews can be used as textures in OpenGL.

As a practical example I’ve put together a sample app that simulates a UINavigationController where view controller transitions are animated using a cube instead of the standard slide animations. Go ahead and download the complete source code. You can see it in action in the video below:

Before we go any further I feel a disclaimer is needed about the choice to use OpenGL in this example. I have to admit it is entirely possible to construct simple 3D animations like this one using nothing but Core Animation – which, being a higher level technology, is easier to use. In fact CATransition already implements an undocumented “cube” animation identical to what I’ve built here from scratch. The only purpose of this example is to present the steps and challenges you need to go through in order to transform UIViews into OpenGL textures.

Drawing the view to an image

The method that makes this entire exercise possible is CALayer’s renderInContext: method. As the name suggests, this will draw a CALayer into a CGContext. By drawing into a CGBitmapContext (a subtype of CGContext), we can grab the image data and pass it to OpenGL using the glTexImage2D function. One point to remember here is that neither Core Graphics nor OpenGL knows anything about points – they only work in pixels. So whenever we transfer UIKit sizes and positions into Core Graphics or OpenGL, we always need to take the content scale factor into account.
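As a sketch of what happens under the hood (the function name here is mine, not from the sample project, and error checking is omitted), the render-and-upload step looks roughly like this:

```objc
// Sketch: rasterize a UIView's layer into an RGBA bitmap and upload it as a GL texture.
// CreateTextureFromView is an illustrative name, not part of the sample project.
GLuint CreateTextureFromView(UIView *view)
{
    // Convert the view's point size to pixels using the content scale factor
    CGFloat scale = view.contentScaleFactor;
    size_t width = (size_t)(view.bounds.size.width * scale);
    size_t height = (size_t)(view.bounds.size.height * scale);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Scale the context so the layer renders at full pixel resolution
    CGContextScaleCTM(context, scale, scale);
    [view.layer renderInContext:context];

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // Hand the raw bitmap bytes over to OpenGL
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(context));

    CGContextRelease(context);
    return texture;
}
```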

You’ll find the code implementing all this in the TextureAtlas class. Notice the code there actually draws two views into a single texture. The input views will be the views of the view controllers involved in the transition. The reason we render the views side by side in the same texture is that shaders (in our case thinly wrapped by GLKBaseEffect) normally sample a single texture per draw call. In our case this trick simplifies the code a bit, but when rendering more complex scenes it also helps performance if you manage to group your drawing into fewer draw calls. This is because sending data to the graphics pipeline is usually the performance bottleneck, not the rendering itself, which runs on optimized hardware.

The second thing I want to point out here is the commented-out code I left in TextureAtlas. If you uncomment it, you will see the texture saved as an image file in the application’s Documents folder. If you open up the file you will see it’s flipped on the y-axis. This happens because, by default, CGContext’s coordinate system places the origin in the lower-left corner. That may come as a surprise if you’ve used CGContext in other places, like UIView’s drawRect:, where the y-axis is flipped for your convenience to match the coordinate system of UIKit. But since OpenGL uses the same coordinate system as Core Graphics, no extra handling is needed in our case.

The right time to rasterize the views

Let’s take a look at the structure of the sample app. The root view controller is NavigationController, a custom view controller container that mimics the native UINavigationController. We have AnimationViewController, a wrapper for the transition animation that takes two views as parameters to kick off the animation. Also there are two dummy view controllers: FirstViewController and SecondViewController.

NavigationController is initialized with an instance of FirstViewController, which immediately gets added as a child view controller to the navigation controller. Now let’s say we need to present an instance of SecondViewController, using the pushViewController: method. What the navigation controller should do here is add the SecondViewController as a child so we can grab its view for the animation, add an AnimationViewController as a child, remove the FirstViewController, wait for the animation to complete and finally remove the AnimationViewController.

This seems straightforward but there is one thing we need to take care of. Once the animation starts, the views from the animated controllers are rasterized to textures and no further updates to them will be visible until the animation is done. View layouts (whether triggered by Auto Layout or autoresizing masks) are performed on the main thread but asynchronously from view initialization. If we simply grab the view from the SecondViewController as soon as it’s instantiated, we might end up with a view that’s the wrong size. Instead we can use UIViewController’s transitionFromViewController:toViewController:duration:options:animations:completion: method to get around this problem, leveraging the fact that view layout is done by the time the method’s animation block parameter gets called.
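Sketched in code, the push sequence could look something like this (simplified relative to the sample project; startCubeAnimationFromView:toView: is a hypothetical stand-in for handing the views over to AnimationViewController):

```objc
// Sketch of a pushViewController: implementation -- simplified, not the sample's exact code.
- (void)pushViewController:(UIViewController *)toController
{
    UIViewController *fromController = self.childViewControllers.lastObject;
    [self addChildViewController:toController];

    [self transitionFromViewController:fromController
                      toViewController:toController
                              duration:1.0
                               options:0
                            animations:^{
        // By the time this block runs, toController's view has been added to the
        // hierarchy and laid out, so it is now safe to rasterize it to a texture.
        [self startCubeAnimationFromView:fromController.view
                                  toView:toController.view];
    }
                            completion:^(BOOL finished) {
        [fromController removeFromParentViewController];
        [toController didMoveToParentViewController:self];
    }];
}
```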

Putting it all together

Once we have the textures ready, what’s left is the fairly standard task of creating an OpenGL scene for the animation. Namely we’ll need to create a polygon mesh for a cube (complete with texture coordinates and surface normals for lighting), rotate the cube and call a delegate when the cube has completed 90 degrees of rotation. You can have a look at how this is implemented in the Cube and AnimationViewController classes. GLKit really helps simplify the effort here. It can take care of iOS specific things like setting up a UIView for OpenGL rendering, setting up an animation loop tied to the refresh rate of the device’s screen or pausing the animation loop when the app goes into background. You can also use it for out of the box shaders and for linear algebra tasks common in 3D applications.
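To give an idea of how little boilerplate GLKit leaves you with, a skeletal version of the animation driver could look like this (the delegate callback, constant and rotation handling are illustrative, not the sample’s exact code):

```objc
// Skeleton of a GLKit-driven animation; names here are illustrative.
static const float kRotationSpeed = M_PI_2; // radians per second (assumed value)

@implementation AnimationViewController
{
    float _rotation; // current cube rotation in radians
}

// Called by GLKViewController once per frame, in step with the display's refresh rate
- (void)update
{
    _rotation += self.timeSinceLastUpdate * kRotationSpeed;
    if (_rotation >= M_PI_2) {
        _rotation = M_PI_2;
        // 90 degrees reached -- time to tell our delegate the transition is done
        [self.delegate animationDidComplete];
    }
}

// Called whenever the GLKView needs a new frame drawn
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...apply _rotation to the model-view matrix and draw the cube mesh...
}

@end
```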

StackOverflow Tag Popularity by Country

StackExchange provides a great little tool called Data Explorer that lets any user run read-only SQL queries on their Q&A sites. I decided to put it to the test and use it to find how many questions got posted from each country for a set of tags. The location information of each question author is declared voluntarily in freeform format. This means the data collected is pretty noisy, if available at all. To map the raw location to a canonical country name I used MapQuest’s open geocoder – one of the few such services that doesn’t restrict the number of daily requests. Below is the resulting map showing the breakdown of questions by country and tag:

Using Physics in a UIKit Based Application

Some of the popular game engines available on iOS like cocos2d and Unity come bundled with physics engines so oftentimes the first thought when you want to add a bit of physics to your app is that you need to use one of the fancy game engines you’ve been hearing so much about. In this post I’ll walk you through using the Box2D physics engine without using any game engine or OpenGL.

Let’s start with a simple example. Create a single view iPad application project:

Next we need to setup Box2D. The easiest way to do this is via CocoaPods. CocoaPods greatly simplifies library dependency management for iOS and OS X projects. If you haven’t used it before you need to run the following commands in the terminal to set it up:

$ [sudo] gem install cocoapods
$ pod setup

Library dependencies are declared in a text file named Podfile which needs to be placed in the same directory as your xcodeproj file. Create the file and paste in the following:

platform :ios
pod 'box2d', '~> 2.3'

This declares that you are targeting iOS and want to use the Box2D library at version 2.3 or any later version up to, but not including, 3.0 (the ~> operator pins the leading version components and lets the last one increase). If you’re new to CocoaPods I encourage you to read about the other dependency declaration options available.

Final step, point your terminal to your Xcode project’s folder and run

$ pod install

This command will download the Box2D library and in addition create an Xcode workspace that contains your initial project file and a new Pods project where all your dependencies reside. From this point on you should always use the generated workspace instead of the initial project and you’ll be good to go.

Let’s move on to the good stuff and lay out some core Box2D concepts we’ll be using in this tutorial. Box2D simulates interaction between rigid bodies. Bodies have a position and a set of fixtures. Each fixture links a body with a shape and a few physical properties like mass, friction and “bounciness”. All bodies are of course part of a world, and you could even have multiple worlds if you so desire. There is a host of other features in Box2D that I won’t be covering in this tutorial. Have a look over the excellent Box2D manual to see what else is available.

Now the great part about Box2D is that it’s completely display agnostic. Its only job is to keep track of your bodies and ensure that everything interacts realistically. This also means it knows nothing of pixels, points or anything like that. Box2D simulates the “real world” and its units of measurement are meters, kilograms and seconds. It’s up to you to convert UIKit’s point based coordinates into something Box2D can handle. There’s a catch though: simulating physics on objects of arbitrary size is computationally intensive. To keep things simple, Box2D is optimized to handle moving objects of sizes between 0.1 and 10 meters. So treating points and meters as equal will not work out very well. Instead we’ll use an arbitrary scaling factor to keep bodies within a reasonable size.

Let’s start by declaring a few functions for converting between points to meters and back:

static const CGFloat kPointsToMeterRatio = 32.0;

float32 PointsToMeters(CGFloat points)
{
    return points / kPointsToMeterRatio;
}

CGFloat MetersToPoints(float32 meters)
{
    return meters * kPointsToMeterRatio;
}

b2Vec2 CGPointTob2Vec2(CGPoint point)
{
    float32 x = PointsToMeters(point.x);
    float32 y = PointsToMeters(point.y);
    return b2Vec2(x, y);
}

CGPoint b2Vec2ToCGPoint(b2Vec2 vector)
{
    CGFloat x = MetersToPoints(vector.x);
    CGFloat y = MetersToPoints(vector.y);
    return CGPointMake(x, y);
}

I also included here two convenience functions for converting between CGPoints and Box2D vectors and back.

Next, open up ViewController.m and replace its contents with the following:

#import "ViewController.h"
#import "World.h"

#import <QuartzCore/QuartzCore.h>

@interface ViewController ()
{
    World *_world;
}

@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    _world = [[World alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];

    [self generateCircles];
}

- (void)generateCircles
{
    CGSize viewSize = self.view.frame.size;
    CGFloat radius = 60;
    srand(time(NULL));
    for(int i = 0;i < 20;i++)
    {
        UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(rand() % (int)(viewSize.width - radius * 2) + radius,
                                                                      rand() % (int)(viewSize.height - radius * 2) + radius,
                                                                      radius, radius)];
        circleView.backgroundColor = [UIColor redColor];
        circleView.layer.cornerRadius = 30;
        [self.view addSubview:circleView];
        [_world addCircleWithView:circleView];
    }
}

@end

The view controller starts by initializing a world object that will handle the interaction with Box2D. The world object is initialized with a frame covering the entire screen. We then create 20 views at random positions on the screen and give them a corner radius to make them look like circles. Finally we pass them to the world object.

Now create a new class called World, do a #import <box2d/Box2D.h> at the top of World.m and add the following ivars to World in the implementation file (which at this point needs to be renamed to World.mm to work with Box2D which is a C++ library):

{
    b2World *_world;
    NSMutableArray *_circles;
}

Don’t worry right now about what these objects represent – I’ll come back to them in the next bits of code. If you’ve never mixed Objective-C and C++ code before, remember that it’s always worth isolating C++ code and imports of C++/Objective-C++ files in implementation files. This saves you the trouble of renaming .m files to .mm when you import your mixed language code, and keeps your compile time as low as possible since Objective-C code compiles faster than Objective-C++ code.
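In practice this means World.h should expose a pure Objective-C interface, with all Box2D types confined to World.mm. Based on the methods used in this tutorial, the header would look something like this:

```objc
// World.h -- no C++ in sight, so ViewController.m can stay a plain .m file
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface World : NSObject

- (id)initWithFrame:(CGRect)frame;
- (void)addCircleWithView:(UIView *)view;

@end
```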

Moving on, declare an initialization method initWithFrame:

- (id)initWithFrame:(CGRect)frame
{
    self = [super init];
    if(self != nil)
    {
        b2Vec2 gravity(0.0f, 10.0f);
        _world = new b2World(gravity);
        _circles = [NSMutableArray array];

        [self createScreenBoundsForFrame:frame];
        [self setupAnimationLoop];
    }

    return self;
}

This code sets up a Box2D world object with gravity pointing in the positive Y direction – which, given UIKit’s top-left origin, means objects will appear to fall from the top to the bottom of our screen. _circles is an array we’ll use to keep track of all the bodies we’ll be adding to our world. Next we’ll create the invisible borders of our screen in createScreenBoundsForFrame: to keep the views from falling off the screen, and finally set up an animation loop to drive everything in setupAnimationLoop. Here’s the code for createScreenBoundsForFrame:

- (void)createScreenBoundsForFrame:(CGRect)frame
{
    b2BodyDef screenBoundsDef;
    screenBoundsDef.position.Set(0.0f, 0.0f);
    b2Body *screenBounds = _world->CreateBody(&screenBoundsDef);

    b2Vec2 worldEdges[5];

    worldEdges[0] = CGPointTob2Vec2(frame.origin);
    worldEdges[1] = CGPointTob2Vec2(CGPointMake(CGRectGetMinX(frame), CGRectGetMaxY(frame)));
    worldEdges[2] = CGPointTob2Vec2(CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame)));
    worldEdges[3] = CGPointTob2Vec2(CGPointMake(CGRectGetMaxX(frame), CGRectGetMinY(frame)));
    worldEdges[4] = CGPointTob2Vec2(frame.origin);

    b2ChainShape worldShape;
    worldShape.CreateChain(worldEdges, 5);
    screenBounds->CreateFixture(&worldShape, 0.0f);
}

Here you can see a screenBounds body object being created. Bodies are always instantiated through the Box2D world object, which will also keep track of them and manage their memory. Also notice how the body is created from a body definition object. This allows multiple bodies to be instantiated using the same defining properties. Next we create a chain shape that spans the left, bottom, right and top edges of the input frame. This is a special type of shape that has no width and is mostly used for defining boundaries. Finally the shape is bound to the body using a fixture.

Before looking at the animation loop let’s introduce the function that will map UIViews to Box2D bodies:

- (void)addCircleWithView:(UIView*)view
{
    NSAssert(view.superview != nil, @"The view parameter is not part of a view hierarchy");

    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position = CGPointTob2Vec2(view.center);
    b2Body *circle = _world->CreateBody(&bodyDef);

    b2CircleShape shape;
    shape.m_radius = PointsToMeters(view.frame.size.width / 2);

    b2FixtureDef fixtureDef;
    fixtureDef.shape = &shape;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.3f;
    fixtureDef.restitution = 0.8f;

    circle->CreateFixture(&fixtureDef);
    // Associate the body with the passed in view
    circle->SetUserData((__bridge void*)view);

    [_circles addObject:[NSValue valueWithPointer:circle]];
}

Here we create a body with the same position as the view, give it a circle shape that matches the size of the view, and set a few physical properties. Later on we’ll need to know which view each body represents, so we save a pointer to the view in the body’s user data field.

Let’s move on to setting up the animation loop:

- (void)setupAnimationLoop
{
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(animationLoop:)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)animationLoop:(CADisplayLink*)sender
{
    CFTimeInterval timeDelta = sender.duration * sender.frameInterval;

    int32 velocityIterations = 6;
    int32 positionIterations = 2;

    _world->Step(timeDelta, velocityIterations, positionIterations);

    // Adjust position for all views associated with circle bodies
    for(NSValue *circleWrapper in _circles)
    {
        b2Body *circle = (b2Body*)[circleWrapper pointerValue];
        UIView *view = (__bridge UIView*)circle->GetUserData();
        view.center = b2Vec2ToCGPoint(circle->GetPosition());
    }
}

Animations are always computed in discrete time intervals called frames. Animations get more accurate as frames get shorter, but since the amount of time required to compute a frame stays relatively constant, you take a performance hit when animating in shorter frames. Displays, however, operate at a fixed refresh rate. In the code above, animationLoop: is in charge of computing and drawing a frame on the screen. CADisplayLink is in charge of deciding when the screen is ready to show something new and calls animationLoop: to do the drawing. It’s entirely possible for animationLoop: to take longer than a single screen refresh cycle, but CADisplayLink ensures that your animations stay as smooth as possible with the available resources and that no animation frames are computed if the display is not ready to show them.

Let’s look at animationLoop: in more detail. The first thing it does is figure out how much time has passed since the last run and pass that info to our world object, which will then proceed to update its internal state. There are two extra parameters there: velocityIterations and positionIterations. These have to do with Box2D’s inner workings. Box2D approximates body velocity and position over multiple iterations; the more iterations you allow, the more realistic the outcome. You can fine tune these values until you find an acceptable balance between performance and realism.

Once Box2D does its thing, all we need to do is iterate over all circle bodies and update the position of their associated UIView to match the bodies’ new positions.

Finally, we need to take care of the cleanup. The only C++ object we allocated ourselves is the world object. Everything else was allocated internally by Box2D and will be cleaned up once the world object is destroyed:

- (void)dealloc
{
    delete _world;
}

That’s it! If you followed all the steps you should end up with something like this:

You can grab the full code for this tutorial from here: https://github.com/MihaiDamian/Box2DTutorial

So there you have it. Physics engines can be used very easily in a UIKit based app. In fact you could even go and build a game like Hundreds using nothing but UIKit and Box2D.