
Using UIViews as OpenGL Textures for Custom View Controller Transitions

In a previous post I demonstrated how UIKit can be used together with a physics engine. In this post I'm going to continue on the topic of mixing UIKit with technologies from the world of game development and show how UIViews can be used as textures in OpenGL.

As a practical example I’ve put together a sample app that simulates a UINavigationController where view controller transitions are animated using a cube instead of the standard slide animations. Go ahead and download the complete source code. You can see it in action in the video below:

Before we go any further, I feel a disclaimer about the choice to use OpenGL in this example is needed. I have to admit it is entirely possible to construct simple 3D animations like this one using nothing but Core Animation, which, being a higher-level technology, is easier to use. In fact, CATransition already implements an undocumented "cube" animation identical to what I've built here from scratch. The only purpose of this example is to present the steps and challenges you need to go through in order to turn UIViews into OpenGL textures.

Drawing the view to an image

The function that makes this entire exercise possible is CALayer's renderInContext: method. As the name suggests, it draws a CALayer into a CGContext. By drawing into a CGBitmapContext (a CGContext backed by a bitmap in memory), we can grab the raw image data and pass it to OpenGL using the glTexImage2D function. One thing to remember here is that neither Core Graphics nor OpenGL knows anything about points – they only work in pixels. So whenever we transfer UIKit sizes and positions into Core Graphics or OpenGL, we always need to take the content scale factor into account.
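
To make this concrete, here is a minimal sketch of that pipeline, assuming a GL context is already current. The method name textureFromView: is illustrative – this is not the actual TextureAtlas code:

    // Render a view's layer into a bitmap context, then hand the pixels to OpenGL.
    - (GLuint)textureFromView:(UIView *)view
    {
        CGFloat scale = [UIScreen mainScreen].scale;
        size_t width = (size_t)(view.bounds.size.width * scale);   // points -> pixels
        size_t height = (size_t)(view.bounds.size.height * scale);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
            colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);

        // renderInContext: draws in points, so scale the context to match the pixels.
        CGContextScaleCTM(context, scale, scale);
        [view.layer renderInContext:context];

        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
            GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(context));

        CGContextRelease(context);
        return texture;
    }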

You’ll find the code implementing all this in the TextureAtlas class. Notice that the code there actually draws two views into a single texture. The input views will be the views of the view controllers involved in the transition. The reason we render the views side by side in the same texture is that shaders (in our case thinly wrapped by GLKBaseEffect) normally sample a single texture per draw call. Here this trick simplifies the code a bit, but when rendering more complex scenes it also improves performance if you manage to group your drawing into fewer draw calls. That’s because sending data to the graphics pipeline is usually the performance bottleneck, not the rendering itself, which runs on optimized hardware.
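
The layout itself is simple. This hypothetical fragment (not the actual TextureAtlas implementation) shows the idea: fromView and toView are drawn into the left and right halves of one context, and the texture coordinates pick a half per face:

    // Draw the outgoing view into the left half of the atlas...
    CGContextSaveGState(context);
    [fromView.layer renderInContext:context];
    CGContextRestoreGState(context);

    // ...and the incoming view into the right half.
    CGContextTranslateCTM(context, fromView.bounds.size.width, 0);
    [toView.layer renderInContext:context];

    // Each cube face then samples only its half of the atlas (s in [0, 0.5]
    // for the outgoing view, s in [0.5, 1] for the incoming one).
    GLfloat fromTexCoords[] = { 0.0f, 0.0f,   0.5f, 0.0f,   0.5f, 1.0f,   0.0f, 1.0f };
    GLfloat toTexCoords[]   = { 0.5f, 0.0f,   1.0f, 0.0f,   1.0f, 1.0f,   0.5f, 1.0f };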

The second thing I want to point out here is the commented-out code I left in TextureAtlas. If you uncomment it, you will see the texture saved as an image file in the application’s Documents folder. If you open the file, you will notice it’s flipped on the y-axis. This happens because, by default, CGContext’s coordinate system places the origin in the lower-left corner. That may come as a surprise if you’ve used CGContext in other places, like UIView’s drawRect:, where the y-axis is flipped for your convenience to match UIKit’s coordinate system. But since OpenGL uses the same coordinate system as Core Graphics, no extra handling is needed in our case.
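
If you want a debug hook like that in your own code, something along these lines will do (a sketch in the same spirit as that commented-out code, not a copy of it, assuming the same bitmap context as above):

    // Dump the bitmap context to Documents so the texture can be inspected.
    CGImageRef image = CGBitmapContextCreateImage(context);
    NSString *documents = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
        NSUserDomainMask, YES)[0];
    NSString *path = [documents stringByAppendingPathComponent:@"atlas.png"];
    NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:image]);
    [png writeToFile:path atomically:YES];
    CGImageRelease(image);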

The right time to rasterize the views

Let’s take a look at the structure of the sample app. The root view controller is NavigationController, a custom view controller container that mimics the native UINavigationController. AnimationViewController is a wrapper for the transition animation; it takes two views as parameters to kick off the animation. Finally, there are two dummy view controllers: FirstViewController and SecondViewController.

NavigationController is initialized with an instance of FirstViewController, which immediately gets added as a child view controller to the navigation controller. Now let’s say we need to present an instance of SecondViewController using the pushViewController: method. What the navigation controller should do here is add the SecondViewController as a child so we can grab its view for the animation, add an AnimationViewController as a child, remove the FirstViewController, wait for the animation to complete, and finally remove the AnimationViewController.
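
In code, the containment choreography might look roughly like this. This is a hypothetical outline – the initializer name and the exact order of calls in the sample may differ:

    - (void)pushViewController:(UIViewController *)toController
    {
        UIViewController *fromController = self.childViewControllers.lastObject;

        // Make the incoming controller a child so we can grab its view
        // (see the caveat below about when it's safe to rasterize it).
        [self addChildViewController:toController];

        // Hand both views to the animation wrapper and put it on screen.
        AnimationViewController *animator = [[AnimationViewController alloc]
            initWithFromView:fromController.view toView:toController.view];
        [self addChildViewController:animator];
        [self.view addSubview:animator.view];
        [animator didMoveToParentViewController:self];

        // The outgoing controller is no longer needed during the animation.
        [fromController willMoveToParentViewController:nil];
        [fromController.view removeFromSuperview];
        [fromController removeFromParentViewController];

        // When the animator's delegate fires, swap in toController.view and
        // remove the animator the same way.
    }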

This seems straightforward, but there is one thing we need to take care of. Once the animation starts, the views of the animated controllers are rasterized to textures, and no further updates to them will be visible until the animation is done. View layout (whether triggered by Auto Layout or autoresizing masks) runs on the main thread, but asynchronously from view initialization. If we simply grab the view from the SecondViewController as soon as it’s instantiated, we might end up rasterizing a view that has the wrong size. Instead, we can use UIViewController’s transitionFromViewController:toViewController:duration:options:animations:completion: method to get around this problem, leveraging the fact that view layout is done by the time the method’s animations block is called.
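
Here is a sketch of that trick, assuming both controllers are already children of the navigation controller. textureAtlas and renderFromView:toView: are illustrative names, not the sample’s actual API:

    [self transitionFromViewController:fromController
                      toViewController:toController
                              duration:0
                               options:UIViewAnimationOptionTransitionNone
                            animations:^{
                                // By the time UIKit calls this block, layout has
                                // run and toController.view has its final size.
                                [self.textureAtlas renderFromView:fromController.view
                                                           toView:toController.view];
                            }
                            completion:^(BOOL finished) {
                                // Safe to start the OpenGL cube animation now.
                            }];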

Putting it all together

Once we have the textures ready, what’s left is the fairly standard task of building an OpenGL scene for the animation. Namely, we need to create a polygon mesh for a cube (complete with texture coordinates and surface normals for lighting), rotate the cube, and call a delegate when the cube has completed 90 degrees of rotation. You can have a look at how this is implemented in the Cube and AnimationViewController classes. GLKit really helps simplify the effort here: it takes care of iOS-specific chores like setting up a UIView for OpenGL rendering, driving an animation loop tied to the refresh rate of the device’s screen, and pausing that loop when the app goes into the background. You can also use it for out-of-the-box shaders and for the linear algebra tasks common in 3D applications.
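
As an illustration, a GLKViewController-style per-frame update for the rotation could look like this (a sketch only; angle, rotationSpeed, delegate, and cubeDidFinishRotating: are made-up names, not the sample’s actual API):

    - (void)update
    {
        self.angle += self.timeSinceLastUpdate * self.rotationSpeed;

        if (self.angle >= GLKMathDegreesToRadians(90.0f)) {
            self.angle = GLKMathDegreesToRadians(90.0f);
            self.paused = YES;                       // stop GLKit's animation loop
            [self.delegate cubeDidFinishRotating:self];
        }

        // Spin the cube around the y-axis and hand the matrix to GLKBaseEffect.
        GLKMatrix4 modelView = GLKMatrix4MakeTranslation(0.0f, 0.0f, -2.0f);
        modelView = GLKMatrix4Rotate(modelView, self.angle, 0.0f, 1.0f, 0.0f);
        self.effect.transform.modelviewMatrix = modelView;
    }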
