Ash Furrow – Teehan+Lax /blog

Krush iOS Architecture
Tue, 04 Feb 2014

At Teehan+Lax, we’ve been working on a project called Krush for several months now. Krush is an interesting application from an iOS architectural standpoint because it touches on a lot of common areas that iOS newcomers have questions about. Specifically, it’s a networked application that hits an API, has an on-disk cache, and presents interesting content. In this post, I’ll be exploring some case studies about aspects of the application: why we chose a certain methodology, how it worked out in practice, and what we would do in hindsight.

We launched Krush as a minimum viable product in 90 days, so the motivation behind why we chose certain methodologies was primarily speed: how quickly can we get to the minimum set of features and capabilities that are required to get something testable to market, and how fast can we iterate on it afterward? These motivations impacted the decisions we made, so you should look at our decisions through that lens if your motivations are different.

Case Study 1: The Network Layer

The network layer was primarily constructed by my talented colleague Brendan Lynch. The network layer is responsible for all outgoing connections from Krush, be they calls to the server’s API or to our CDN for asset delivery. Everything goes through a common interface.

Instead of using newer APIs like NSURLSession, we opted for the more familiar combination of NSOperation and NSURLConnection. Specifically, we used a request client, owned by our app delegate, that managed all network activity. This request client holds an NSOperationQueue where our network requests are queued.

The network requests themselves consist of a URL, parameters, and encoding specifications for OAuth. The request objects know how to construct OAuth-signed NSURLRequests, which makes replaying a request after a failed connection trivial. Network requests subclass NSOperation and conform to the NSURLConnectionDataDelegate protocol.
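
A minimal sketch of what such a request operation’s interface might look like (the names here are illustrative, not the actual Krush source):

@interface TLAPIRequestOperation : NSOperation <NSURLConnectionDataDelegate>

// The ingredients each request is built from.
@property (nonatomic, copy, readonly) NSURL *URL;
@property (nonatomic, copy, readonly) NSDictionary *parameters;

// Builds a freshly OAuth-signed NSURLRequest, so a failed request can
// simply be rebuilt and re-enqueued.
- (NSURLRequest *)signedURLRequest;

@end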

If the network request fails or times out, the request client will re-enqueue it automatically, up to a certain number of times, at which point it finally fails.

Every operation has a callback block. When an operation completes or fails, that block is invoked, passing along the data returned from the network and the result of the operation. The callback blocks, which are defined in the request client, transform that data into the on-disk cache, which we’ll cover in the next section.

This network architecture works well in practice. When a request does fail, it’s automatically restarted, so our application is very robust. By going with a familiar approach, instead of a newer iOS 7 API, we were able to get a product out the door faster.

If we had to do it all over, it might be worth investigating NSURLSession in order to reduce code effort and to take advantage of iOS 7’s background fetch API. I’d also want to explore the idea of sending commands from the view controller up the responder chain to the app delegate, which could then forward them to the request client. That way, our view controllers wouldn’t need to know about the request client at all.
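
A sketch of that idea – passing a nil target makes UIKit walk the responder chain until some object handles the action (the selector here is hypothetical):

// In a view controller: fire a command up the responder chain. UIKit finds
// the first responder that implements the selector – ultimately the app
// delegate, which forwards the command to the request client.
[[UIApplication sharedApplication] sendAction:@selector(fetchUserDetails:)
                                           to:nil
                                         from:self
                                     forEvent:nil];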

Case Study 2: On-Disk Cache

Krush is a very visual application – it downloads and displays a lot of images. Those images, once decompressed from JPEGs into bitmaps for display, take up a lot of memory. A lot. Holding the entire contents of the application in memory is not an option, and downloading each asset every time it is to be displayed would take up far too much of the user’s network resources. The solution was to use an on-disk cache.

For Readability, an earlier project, Brendan built an on-disk storage system using SQLite, which he was familiar with. This time, however, he was busy building the network layer while I was building the on-disk cache, and my SQLite-fu is weak. Instead, I relied on what I was familiar with: Core Data.

Core Data isn’t an object persistence library per se, but rather an object graph management framework that just happens to be able to persist data to an on-disk store. We use it as a cache; the store is deleted with every launch of the application.
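
Treating the store as a disposable cache can be as simple as deleting the store file on launch, before standing up the persistent store coordinator (a sketch – the filename and directory are assumptions):

// Delete last session's cache before adding the persistent store.
// "Cache.sqlite" and the Caches directory are illustrative choices.
NSURL *cachesURL = [[[NSFileManager defaultManager] URLsForDirectory:NSCachesDirectory
                                                           inDomains:NSUserDomainMask] lastObject];
NSURL *storeURL = [cachesURL URLByAppendingPathComponent:@"Cache.sqlite"];
[[NSFileManager defaultManager] removeItemAtURL:storeURL error:NULL];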

Application startup is one of the most crucial aspects of an app. If an application doesn’t get up and running in a reasonable amount of time, the user is going to give up on it. In the case of Krush, we were getting feedback from users and the client that the application was slow at startup. Uh oh.

I opened Instruments and tested the application startup time on a device.

Oh boy were there a lot of network connections being made. In one trace, I measured 170 network requests when the app was first launched. It turned out that we were making lots of requests preemptively instead of on-demand. I changed our network requests to be less optimistic and more on-demand, which was an easy change to make. However, that change led to a lot of interface jitteriness. Again, I measured.

We launched Krush using a very simple Core Data cache because we didn’t have a lot of time to invest in anything more complex. The stack consisted of a single managed object context on the main thread. I’ve never been a fan of prematurely optimizing a problem, anyway; I prefer a measure-adjust-measure cycle. When I measured for jitteriness in the interface, I saw the problem immediately: Core Data was blocking the main thread.

I did some research and decided to use a different approach. The request client instance would own a background context that would do work on its own queue; the background queue and main thread queue would share a single persistent store coordinator.
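
The stack then looks something like this (a sketch under the setup just described):

// Both contexts share one persistent store coordinator, so changes
// saved by either context hit the same on-disk store.
NSPersistentStoreCoordinator *coordinator = ...; // the app's single coordinator

NSManagedObjectContext *mainContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
mainContext.persistentStoreCoordinator = coordinator;

NSManagedObjectContext *backgroundContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.persistentStoreCoordinator = coordinator;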

Let’s take a look at an example network request for the details of a user.

The user object already exists in the main managed object context, but not necessarily the background context. We have to save the main context, ensuring the object exists in the persistent store. Then we grab the objectID from the user and, in the callback block from the network request, grab the corresponding user object from the background context. Here, on a background thread, we perform our JSON parsing and form relationships between the background context user and other objects in the background context. Finally, we save the background context, which fires a notification we use to merge the background changes into the main context. The corresponding views are updated via KVO. Phew!
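
Roughly, that flow looks like this (a sketch of the steps just described – KRUser and the context names are illustrative, not the actual Krush source):

// 1. Save the main context so the user is guaranteed to be in the store.
[mainContext save:NULL];
NSManagedObjectID *userID = user.objectID;

// 2. In the network callback, hop onto the background context's queue.
[backgroundContext performBlock:^{
    KRUser *backgroundUser = (KRUser *)[backgroundContext objectWithID:userID];

    // ... parse JSON and form relationships against backgroundUser ...

    // 3. Saving posts NSManagedObjectContextDidSaveNotification; we observe
    //    it and merge the changes into the main context on the main thread.
    [backgroundContext save:NULL];
}];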

The results were dramatic. We trimmed our launch time significantly and made the whole interface a lot more responsive.

Ideally, all of our changes would be made on the background managed object context. If I had to redo this solution, I would make main-context model instances read-only (semantically) and only perform changes on the background context. That way, I would eliminate having to save the main context before accessing objects in the background context.

The lesson learned here is to always measure your application before launching. It only took a few days to really shore up the interface and startup time. If we had invested those days before the launch, we could have had a much smoother experience out of the gates instead of in our iteration phase.

Case Study 3: User Profile View

The Krush user profile is a complex thing. It was important to get right both from a design perspective and from a code perspective. The design we envisioned has three tabs: Krushes, Influence, and Network.

More than that, though, the tabs need to be modular because, for a brand’s user page, we would want different tabs. It’s an interesting architectural problem; how does one structure the code in such a way that it can be reused in a modular fashion?

We could have used child view controllers, but I wanted to try something more data-driven. Instead, I used only one table view controlled by a single UITableViewController. That controller holds a strong reference to a datasource object, which conforms to a protocol.

The data source changed when a different tab was selected, and whenever it changed, the table view was reloaded. When the table view queried the controller about what to display, the controller in turn queried the datasource.

The datasources were used to populate the tab selection control, which we wrote ourselves. Depending on whether the user being displayed was a brand, different datasources were available. By using ReactiveCocoa, we were able to derive the datasource state of the view controller in viewDidLoad. Our table view controller itself is very light in logic, instead delegating layout concerns to the datasources.

Each datasource is responsible for supplying information like the number of rows or the height for any given row, and also for laying out individual cells. Each data source also has a class property and reuseIdentifier, which were used to register custom UITableViewCell subclasses with the table view in viewDidLoad. Finally, each data source was also responsible for exposing a ReactiveCocoa signal that would trigger a tableview reload.
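
A sketch of what such a datasource protocol might look like (the names are illustrative, not the shipping code):

@protocol TLProfileDataSource <NSObject>

// Used to register a custom cell class with the table view in viewDidLoad.
@property (nonatomic, readonly) Class cellClass;
@property (nonatomic, readonly) NSString *reuseIdentifier;

// Layout concerns the table view controller delegates to the datasource.
- (NSInteger)numberOfRows;
- (CGFloat)heightForRowAtIndexPath:(NSIndexPath *)indexPath;
- (void)configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath;

// A ReactiveCocoa signal that fires whenever the table view should reload.
- (RACSignal *)reloadSignal;

@end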

This datasource approach worked well when designs changed in our iteration phase of the project. It also kept code clean and decoupled. One weakness of this approach is that, when aspects of the Network tab design were integrated into the Krushes tab design, there wasn’t an easy way to share that logic between the two different datasources. I wish that Objective-C had language-level support for abstract classes, because that could have helped reduce code duplication between datasource objects.

Case Study 4: MVVM on the Feed

Early, pre-release versions of the application had a simple feed and a simple user onboarding tour. When we demoed it to colleagues around the office, the tour was identified as a weakness in the initial user experience. Geoff suggested integrating info cards into the feed on the first launch to show the user how to use the app. That way, they don’t have to memorize instructions from the tutorial before they can even use the app.

At that moment, our feed view controller was using an NSFetchedResultsController to display contents of our Core Data store. Instead of integrating logic for the new onboarding cards into our feed view controller, I explored an emerging pattern in Objective-C: Model-View-ViewModel.

In a nutshell, we abstracted all logic for presenting content in our view controller into a view model, which was agnostic of the actual UI. The view model would only provide information like whether or not the Endorse and Save buttons should be visible, or the image to use for a specific table view cell. We also moved the fetched results controller delegate code from the view controller into the view model, which would insert onboarding models into an internal array that it maintained.

The view model would also be notified when the user was about to reach the end of the feed so that more results could be fetched, or when the user pulled-to-refresh.

This approach worked well when we integrated hashtags into the application. The same view controller was used, just with a different view model, with different presentation logic. By making our different view models conform to a common protocol that the view controller can rely on, we were able to keep our controller agnostic of what it was presenting, and how it was presenting it.
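
That common protocol might look something like this (a sketch; the names are illustrative):

@protocol TLFeedViewModel <NSObject>

// Pure presentation logic – the controller never sees the underlying models.
- (NSUInteger)numberOfItems;
- (UIImage *)imageAtIndex:(NSUInteger)index;
- (BOOL)shouldShowEndorseButtonAtIndex:(NSUInteger)index;
- (BOOL)shouldShowSaveButtonAtIndex:(NSUInteger)index;

// Hooks for the controller to report scrolling behaviour.
- (void)loadMoreItems;
- (void)refreshItems;

@end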

I’m very happy with how this approach worked out for us. If I had to do it over, I’d try harder to reduce code duplication between the different view models. Again, an abstract class could help here.

Conclusion

This was an exciting project for us here at Teehan+Lax. We learned a lot over the course of the project and had a lot of fun doing it. We hope that by sharing some of the lessons we learned along the way, developers can make their own awesome apps. Go do great stuff!

Model-View-ViewModel for iOS
Tue, 14 Jan 2014

If you’ve been developing iOS applications for any length of time, you’ve probably heard of Model-View-Controller, or MVC. It’s your standard approach to building iOS apps. Lately, however, I’ve been growing tired of some of MVC’s shortcomings. In this article, I’m going to go over what MVC is, detail its weaknesses, and tell you about a new way to structure your apps: Model-View-ViewModel. Get out your buzzword bingo cards, because we’re about to have a paradigm shift.

Model-View-Controller

Model-View-Controller is the definitive paradigm within which to structure your code. Apple even says so. Under MVC, all objects are classified as either a model, a view, or a controller. Models hold data, views present an interactive interface to the user, and view controllers mediate the interaction between the model and the view.

Under MVC, the view notifies the controller of any user interaction. The view controller then updates the model to reflect the change in state. That model then (typically through Key-Value Observation) notifies any controllers of updates they need to perform on their views. This mediation makes up a lot of the application code written in iOS apps.

Model objects are typically very, very simple. Oftentimes, they’re Core Data managed objects or, if you prefer to eschew Core Data, instances of other popular model layers. According to Apple, models contain data and logic to manipulate that data. In practice, models are often very thin and, for better or worse, model logic gets shuffled into the controller.

Views (typically) are either UIKit components or programmer-defined collections of UIKit components. These are the pieces that go inside your .xib or Storyboard: the visual and interactable components of an app. Buttons. Labels. You get the idea. Views should never have direct references to models and should only have references to controllers through IBAction events. Business logic that doesn’t pertain to the view itself has no business being there.

That leaves us with controllers. Controllers are where the “glue code” of an app goes: the code that mediates all interactions between models and views. Controllers are responsible for managing the view hierarchy of the view they own. They respond to the view loading, appearing, disappearing, and so on. They also tend to get laden down with the model logic that we kept out of our model and the business logic we kept out of our views. That leads us to our first problem with MVC…

Massive View Controller

Because of the extraordinary amount of code that’s placed in view controllers, they tend to become rather bloated. It’s not unheard of in iOS to have view controllers that stretch to thousands and thousands of lines of code. These bulging pieces of your app weigh it down: massive view controllers are difficult to maintain (because of their sheer size), contain dozens of properties that make their state hard to manage, and conform to many protocols which mixes that protocol response code with controller logic.

Massive view controllers are difficult to test, either manually or with unit tests, because they have so many possible states. Breaking your code up into smaller, more bite-sized pieces is typically a very good thing. A recent story comes to mind.

Missing Network Logic

The definition of MVC – the one that Apple uses – states that all objects can be classified as either a model, a view, or a controller. All of ‘em. So where do you put network code? Where does the code to communicate with an API live?

You can try to be clever and put it in the model objects, but that can get tricky because network calls should be done asynchronously, so if a network request outlives the model that owns it, well, it gets complicated. You definitely should not put network code in the view, so that leaves… controllers. This is a bad idea, too, since it contributes to our Massive View Controller problem.

So where, then? MVC simply doesn’t have a place for code that doesn’t fit in within its three components.

Poor Testability

Another big problem with MVC is that it discourages developers from writing unit tests. Since view controllers mix view manipulation logic with business logic, separating out those components for the sake of unit testing becomes a herculean task. A task that many ignore in favour of… just not testing anything.

Fuzzy Definition of “Manage”

I mentioned earlier that view controllers manage a view hierarchy; view controllers have a “view” property, and may access any subviews of that view through IBOutlets. This doesn’t scale well when you have many outlets, and at some point, you’re probably better off using child view controllers to help manage all your subviews.

Where is that point? When does it become beneficial to break things down? Does the business logic to validate user input belong in the controller, or the model?

There are multiple fuzzy lines here that no one can quite seem to agree upon. It seems like no matter where you draw those lines, the view and corresponding controller become so tightly coupled, anyway, that you might as well treat them as one component.

Hey! Now there’s an idea …

Model-View-ViewModel

In an ideal world, MVC might work well. However, we live in the real world, and it does not. Now that we’ve detailed the ways that MVC breaks down with typical use, let’s take a look at an alternative: Model-View-ViewModel.

MVVM comes from Microsoft, but don’t hold that against it. MVVM is very similar to MVC. It formalizes the tightly coupled nature of the view and controller and introduces a new component.

Under MVVM, the view and view controller become formally connected; we treat them as one. Views still don’t have references to the model, but neither do controllers. Instead, they reference the view model.

The view model is an excellent place to put validation logic for user input, presentation logic for the view, kick-offs of network requests, and other miscellaneous code. The one thing that does not belong in the view model is any reference to the view itself. The logic in the view model should be just as applicable on iOS as it is on OS X. (In other words, don’t #import UIKit.h in your view models and you’ll be fine.)

Since presentation logic – like mapping a model value to a formatted string – belongs in the view model, view controllers themselves become far, far less bloated. The best part is that when you’re starting off using MVVM, you can place only a little bit of logic in your view models, and migrate more of it over to them as you become more comfortable with the paradigm.
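
For example, date presentation – the kind of logic that usually bloats a view controller – reduces to a small view model method (a sketch; the model property is hypothetical):

// In the view model: map a model date to user-presentable text.
// No UIKit involved, so this is trivially unit-testable.
- (NSString *)createdAtText {
    NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
    formatter.dateStyle = NSDateFormatterMediumStyle;
    return [formatter stringFromDate:self.model.createdAt];
}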

iOS apps written using MVVM are highly testable; since the view model contains all the presentation logic and doesn’t reference the view, it can be fully tested programmatically. The numerous hacks involved in testing Core Data models notwithstanding, apps written using MVVM can be fully unit tested.

The result of using MVVM, in my experience, is a slight increase in the total amount of code, but an overall decrease in code complexity. A worthwhile tradeoff.

You’ll notice that I’ve used the ambiguous verbs “notify” and “update” when describing how these components interact, but haven’t specified how to do that. You could use KVO, as with MVC, but that can quickly become unmanageable. In practice, using ReactiveCocoa is a great way to glue all the moving pieces together.
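
For instance, ReactiveCocoa’s RAC and RACObserve macros can express that notify/update glue in a single line (a sketch; the property names are hypothetical):

// In the view controller: keep the label's text bound to the view model.
// Whenever displayName changes, the label updates automatically.
RAC(self.nameLabel, text) = RACObserve(self.viewModel, displayName);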

For more information on how to use MVVM in conjunction with ReactiveCocoa, read Colin Wheeler’s excellent write-up or check out an open source app I wrote. You can also read my book on ReactiveCocoa and MVVM.

“Best Work”
Thu, 31 Oct 2013

I was asked today if I thought I was doing my best work. “Best work?”, I thought, “what does that even mean?”

Colosseum – certainly someone's best work.

I feel that, at Teehan+Lax, I’m encouraged to do the best work I’m capable of in the moment and to expand those capabilities so I can do even better work tomorrow. I’ve heard it said that if, as a programmer, you’re not embarrassed by the code you wrote six months ago, then you’re not doing your job. I wouldn’t take that extreme of a stance, but it’s a worthwhile thought to explore.

I believe that it’s always necessary to stop and consider if the approach that I’m taking to a problem is the best way to solve that problem. Maybe there’s a better way, but I’m not aware of it because I haven’t taken the time to stop and think. Maybe there’s a better approach that I’m already aware of but eschewing because it takes too much time or effort.

Avoiding a better solution because it takes too much time? Is it a good idea to ship subpar work for the sake of a deadline? If it’s acceptable to underperform for the sake of a deadline, then what other circumstances lead to the acceptability of shitty work? It’s a slippery slope that I’m not comfortable sliding down.

GitHub Fundamentals
Tue, 08 Oct 2013

Last week, I gave a talk at FITC SCREENS discussing GitHub for Designers. You can find the slides here. I thought it would be useful to distill the fundamentals – the most important part of my talk – down to a blog post. So here we are.

Git is a tool for managing files – and changes to them. GitHub is a web service built on top of git. This article is going to introduce you to some fundamentals of git and GitHub and show you how you can use these tools to collaborate with other members of your team.

Git can manage any type of file, but it’s typically used for text files and images; more complex formats, like PSDs, usually stay out of git.

We use git for two main reasons: it helps us manage the changes that multiple team members are making to the same files, and it helps us keep track of who made what changes, and when. The benefits of using git are concrete, but it’s known for having a steep learning curve. However, learning git is a great first step toward making code changes to the projects you’ve designed. This article isn’t going to dive into the command line or a specific tool, but rather teach you some high-level concepts that will help you regardless of what tool you’re using.

Branches

Branches are used to isolate changes to files. Imagine copying all your files to a new folder. Any changes you make are only made to the copies. Eventually, you pull those copies back into the original folder.

A branch is like making that copy of all your files.

The “master” branch is the canonical branch – all new branches are created off of master.

When you create a new branch from master, it’s like you’re creating a new copy of all of your files. Changes made to those copies won’t be reflected back in the original folder.

Commits

When you’re working on a branch, it’s like you’re making changes to the copies of those files. When you’ve finished a change, like adding some new CSS, you should commit your changes. This “saves” a snapshot of the new files in your new directory, allowing you to revert back to this point in time later on. Until you commit, nothing is permanent, so it’s easy to make wacky, experimental changes to see if something will work.

When you commit, you only commit to your branch. Your branch represents the series of commits since it diverged from master.

Here we have a branch created from master with two commits.

Local & Remote

This is the part of git that gets a little complicated. The master branch and your branches exist locally, but there’s also a copy of master on GitHub, called the remote. This would be like having a shared file server in an office with a folder you keep in sync.

When you make a new branch, all your changes are made locally, on your computer’s folder.

Likewise, your branch only exists on your local computer until you push that branch to the remote, which we’ll talk about next.

Pushing & Pulling

When you’re ready to show others your changes, you should push your branch, which includes your changes, to GitHub. This is analogous to copying your copied folder back to the shared file server.

Now your branch exists on the local and the remote. Once you push your changes to GitHub, create a Pull Request. This is a request to have your branch’s changes pulled into the master branch.

Once those changes are pulled into the remote master, your version of master is out of date (remember, we isolated those changes in your branch). We need to pull down from the remote to update our local copy. This is like re-copying the original files from the server.

This gets confusing because we’re using the term “pull” in two contexts. First, we want to pull our branch’s changes into the master branch. Second, we want to pull the changes made to master on the server to our local copy.

Git is a great tool for managing files in a team, but it is complicated to learn. The best way to become more familiar with it is just to dive in and use it.

Custom UIViewController Transitions
Tue, 24 Sep 2013

Update: As of iOS 7.0, interface transitions in landscape orientation are in a dire state. Read more about it. This article focuses on portrait-only transitions.

When teaching a new programming technique, there is a spectrum ranging from practice to theory. At one end, you teach only what you need to understand to implement a feature. At the other end, you teach the reasoning behind the API necessary to implement a feature. Too practical and you risk creating cargo-cult coders. Too theoretical and you risk alienating users of your API. It’s a tough balance to strike.

Of all of the new APIs introduced in iOS 7, perhaps the most confusing was the custom UIViewController transitions API. This is mostly because the WWDC presentation leaned heavily toward the theoretical end of the spectrum. The problem is exacerbated by the lack of sample code illustrating how to use the custom view controller presentation API.

We’re here to fix that. The API itself isn’t that confusing – it just takes some experience getting your hands dirty. Let’s dive in.

Recall that UIViewController is the main unit of composition for application logic within iOS applications. View controllers are presented to users via navigation controllers, tab bar controllers, and modally. Before iOS 7, each of these presentations had predefined animations that were not customizable. Pushes onto a navigation controller’s stack moved from right to left. Selecting a different tab didn’t provide any animation. Modal presentations used one of a few pre-defined transitions (the default was a slide-up).

What’s more, once a transition was complete, the presenting view controller was no longer visible at all (on the iPhone, at least). This made implementing custom modal views difficult.

iOS 7 introduces a new way to use a completely custom animation when transitioning from one view controller to another, whether it be a push onto a navigation controller stack, selecting a different tab, or a plain presentation. Additionally, the API allows you to present a view controller without necessarily obscuring the presenting controller. This makes faux popovers and alert views possible for the first time using UIViewControllers. Awesome!

A custom transition can either be interactive or non-interactive. We’re going to focus on the non-interactive type first because it’s a lot easier to implement. Remember that the goal of this article isn’t to explain the API – check out the WWDC video for a great explanation – the goal here is gain practical, hands-on experience.

Here’s the interaction we’re going to create. It’s nothing special – just a view appearing from the right edge of the screen. What is special is that we’re actually presenting a view controller, even though the presenting view controller remains visible.

So how do we accomplish this? The trick here is to create a new object called the animator that will be responsible for animating the presentation (and corresponding dismissal). When you present the view controller, set the modalPresentationStyle to UIModalPresentationCustom and set yourself as the transitionDelegate. Then implement the UIViewControllerTransitioningDelegate methods to vend the animator to the system.

You can do this in one of a few ways. We’re using a Storyboard file with a modal segue to the detail view controller, so we’ll implement the prepareForSegue:sender: method.

However, you could use the traditional presentation API if you’re not using Storyboards.

UIViewController *viewController = ...;
viewController.transitioningDelegate = self;
viewController.modalPresentationStyle = UIModalPresentationCustom;
[self presentViewController:viewController animated:YES completion:nil];

This code is for presenting a view controller modally. Similar techniques work with UINavigationControllers and UITabBarControllers. In those cases, simply conform to those classes’ delegate protocols and implement the corresponding methods to vend an animator. In our examples, we’re going to use plain presentation methods.

Note that if you set the modal presentation style to custom, the system expects you to provide a non-nil transitioning delegate. You’ll receive a runtime warning if you don’t.

After defining the transitioningDelegate on the presented view controller, we’ll need to implement the UIViewControllerTransitioningDelegate methods to vend the animator.

- (id<UIViewControllerAnimatedTransitioning>)animationControllerForPresentedController:(UIViewController *)presented
                                                                  presentingController:(UIViewController *)presenting
                                                                      sourceController:(UIViewController *)source {
   
   TLTransitionAnimator *animator = [TLTransitionAnimator new];
   animator.presenting = YES;
   return animator;
}

- (id<UIViewControllerAnimatedTransitioning>)animationControllerForDismissedController:(UIViewController *)dismissed {
   TLTransitionAnimator *animator = [TLTransitionAnimator new];
   return animator;
}

That’s really all there is to it. With only a few lines of code, we’ve invoked a custom transition to a new view controller. This is awesome because the presenting view controller is completely unaware of how the presentation will take place – there is a clear separation of concerns. This is also awesome because we can reuse the animator elsewhere in our application for the same presentation logic.

So what’s in the animator? The animator is just an NSObject subclass that conforms to the UIViewControllerAnimatedTransitioning protocol. The two required methods of this protocol define how long the animation from one view controller to the other will take, and the code to actually animate that transition.

When the transition itself happens, the animator is passed a transition context that holds information about the transition. This includes the “from” and “to” view controllers and a container view. This container view is where the animation actually takes place. You add both view controllers’ views to the container view, perform some transition animation, then tell the context that the transition has completed. It’s that simple.

Our animator takes care of both a presentation and a dismissal (the property that is set in our UIViewControllerTransitioningDelegate methods). Let’s take a look at the complete implementation.

- (NSTimeInterval)transitionDuration:(id <UIViewControllerContextTransitioning>)transitionContext {
   return 0.5f;
}

- (void)animateTransition:(id <UIViewControllerContextTransitioning>)transitionContext {
   // Grab the from and to view controllers from the context
   UIViewController *fromViewController = [transitionContext viewControllerForKey:UITransitionContextFromViewControllerKey];
   UIViewController *toViewController = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
   
   // Set our ending frame. We'll modify this later if we have to
   CGRect endFrame = CGRectMake(80, 280, 160, 100);
   
   if (self.presenting) {
       fromViewController.view.userInteractionEnabled = NO;
       
       [transitionContext.containerView addSubview:fromViewController.view];
       [transitionContext.containerView addSubview:toViewController.view];
       
       CGRect startFrame = endFrame;
       startFrame.origin.x += 320;
       
       toViewController.view.frame = startFrame;
       
       [UIView animateWithDuration:[self transitionDuration:transitionContext] animations:^{
           fromViewController.view.tintAdjustmentMode = UIViewTintAdjustmentModeDimmed;
           toViewController.view.frame = endFrame;
       } completion:^(BOOL finished) {
           [transitionContext completeTransition:YES];
       }];
   }
   else {
       toViewController.view.userInteractionEnabled = YES;
       
       [transitionContext.containerView addSubview:toViewController.view];
       [transitionContext.containerView addSubview:fromViewController.view];
       
       endFrame.origin.x += 320;
       
       [UIView animateWithDuration:[self transitionDuration:transitionContext] animations:^{
           toViewController.view.tintAdjustmentMode = UIViewTintAdjustmentModeAutomatic;
           fromViewController.view.frame = endFrame;
       } completion:^(BOOL finished) {
           [transitionContext completeTransition:YES];
       }];
   }
}

The first method is very straightforward – how long should the transition take? The next method is a little trickier. It’s passed the transition context and then, depending on whether it’s presenting or dismissing, performs an animation to present or dismiss the detail view controller.

We’re using plain ol’ UIView block-based animations here. Nothing fancy. The only tricky thing is that the “to” and “from” view controllers change depending on whether you’re presenting or dismissing. That is to say, the presenting view controller is the “from” controller when presenting and the “to” controller when dismissing.

As you can see, it’s not a lot of code to implement a custom transition. Let’s take a look at a more complicated example: an interactive transition. These are trickier for a few reasons. First, you’ll usually want a way to present an interactive transition non-interactively, as well as dismiss it non-interactively. This lets users choose how they want to present or dismiss the content in the view controller. Additionally, the interactivity is tied to a gesture recognizer. Where does the code go to respond to that recognizer?

The answer is to subclass UIPercentDrivenInteractiveTransition, make it the animator, the transitioning delegate, and the gesture recognizer target. This is going to bundle all of your transitioning logic into one place. There’s a lot to unwind here, so let’s take it one step at a time.

First, our interactor is going to be initialized with a parent view controller. This is because the interactor itself is going to be responsible for presenting the new view controller in the gesture recognizer callback method.

-(id)initWithParentViewController:(UIViewController *)viewController {
   if (!(self = [super init])) return nil;
   
   _parentViewController = viewController;
   
   return self;
}

The interactor is the target of a screen edge pan gesture recognizer which we’ll set up in our presenting view controller’s viewDidLoad.

UIScreenEdgePanGestureRecognizer *gestureRecognizer = [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self.menuInteractor action:@selector(userDidPan:)];
gestureRecognizer.edges = UIRectEdgeLeft;
[self.view addGestureRecognizer:gestureRecognizer];

Our userDidPan: method looks like the following.

-(void)userDidPan:(UIScreenEdgePanGestureRecognizer *)recognizer {
   CGPoint location = [recognizer locationInView:self.parentViewController.view];
   CGPoint velocity = [recognizer velocityInView:self.parentViewController.view];
   
   if (recognizer.state == UIGestureRecognizerStateBegan) {
       // We're being invoked via a gesture recognizer – we are necessarily interactive
       self.interactive = YES;
       
       // The side of the screen we're panning from determines whether this is a presentation (left) or dismissal (right)
       if (location.x < CGRectGetMidX(recognizer.view.bounds)) {
           self.presenting = YES;
           TLMenuViewController *viewController = [[TLMenuViewController alloc] initWithPanTarget:self];
           viewController.modalPresentationStyle = UIModalPresentationCustom;
           viewController.transitioningDelegate = self;
           [self.parentViewController presentViewController:viewController animated:YES completion:nil];
       }
       else {
           [self.parentViewController dismissViewControllerAnimated:YES completion:nil];
       }
   }
   else if (recognizer.state == UIGestureRecognizerStateChanged) {
       // Determine our ratio between the left edge and the right edge. This means our dismissal will go from 1...0.
       CGFloat ratio = location.x / CGRectGetWidth(self.parentViewController.view.bounds);
       [self updateInteractiveTransition:ratio];
   }
   else if (recognizer.state == UIGestureRecognizerStateEnded) {
       // Depending on our state and the velocity, determine whether to cancel or complete the transition.
       if (self.presenting) {
           if (velocity.x > 0) {
               [self finishInteractiveTransition];
           }
           else {
               [self cancelInteractiveTransition];
           }
       }
       else {
           if (velocity.x < 0) {
               [self finishInteractiveTransition];
           }
           else {
               [self cancelInteractiveTransition];
           }
       }
   }
}

Quite a lot there. Don’t worry, we’re going to go through it all. The most important thing to note is that the gesture recognizer code does not implement any animation code.

When our recognizer begins, we present (or dismiss) the view controller. When the recognizer changes, we update the percent complete on self. Finally, when the recognizer finishes, we decide whether to complete or cancel the transition depending on the last direction of the gesture recognizer. It’s a lot of code, but it’s all fairly straightforward.

Notice that when we present the new view controller, we set self as the transition delegate. When prompted to vend an animator, we’ll also return self.

- (id <UIViewControllerAnimatedTransitioning>)animationControllerForPresentedController:(UIViewController *)presented presentingController:(UIViewController *)presenting sourceController:(UIViewController *)source {
   return self;
}

- (id <UIViewControllerAnimatedTransitioning>)animationControllerForDismissedController:(UIViewController *)dismissed {
   return self;
}

There are two more methods to provide an interactor for the interactive transition. These methods are called after the previous methods. We’re going to return self if we’re interactive and nil if we’re not.

- (id <UIViewControllerInteractiveTransitioning>)interactionControllerForPresentation:(id <UIViewControllerAnimatedTransitioning>)animator {
   if (self.interactive) {
       return self;
   }
   
   return nil;
}

- (id <UIViewControllerInteractiveTransitioning>)interactionControllerForDismissal:(id <UIViewControllerAnimatedTransitioning>)animator {
   if (self.interactive) {
       return self;
   }
   
   return nil;
}

The next methods are copied almost directly from the first example. We need to provide animations to non-interactively present and dismiss the view controller.

What’s really interesting is the interactor method in the UIViewControllerInteractiveTransitioning protocol. We’ll implement this method to begin our interactive transition. Then we’ll override the UIPercentDrivenInteractiveTransition methods to update our transition, then finally to complete or cancel the transition.

The startInteractiveTransition: method sets up the container view with the “to” and “from” view controllers’ views. It’s important which order you add these in so the correct view is “on top.”

-(void)startInteractiveTransition:(id<UIViewControllerContextTransitioning>)transitionContext {
   self.transitionContext = transitionContext;
   
   UIViewController *fromViewController = [transitionContext viewControllerForKey:UITransitionContextFromViewControllerKey];
   UIViewController *toViewController = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
   
   CGRect frame = [[transitionContext containerView] bounds];
   
   if (self.presenting)
   {
       // The order of these matters – determines the view hierarchy order.
       [transitionContext.containerView addSubview:fromViewController.view];
       [transitionContext.containerView addSubview:toViewController.view];
       
       frame.origin.x -= CGRectGetWidth([[transitionContext containerView] bounds]);
   }
   else {
       [transitionContext.containerView addSubview:toViewController.view];
       [transitionContext.containerView addSubview:fromViewController.view];
   }
   
   toViewController.view.frame = frame;
}

Next we need to update the position of the menu view controller depending on the transition’s percent complete.

- (void)updateInteractiveTransition:(CGFloat)percentComplete {
   id<UIViewControllerContextTransitioning> transitionContext = self.transitionContext;
   
   UIViewController *fromViewController = [transitionContext viewControllerForKey:UITransitionContextFromViewControllerKey];
   UIViewController *toViewController = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
   
   // Presenting goes from 0...1 and dismissing goes from 1...0
   CGRect frame = CGRectOffset([[transitionContext containerView] bounds], -CGRectGetWidth([[transitionContext containerView] bounds]) * (1.0f - percentComplete), 0);
   
   if (self.presenting)
   {
       toViewController.view.frame = frame;
   }
   else {
       fromViewController.view.frame = frame;
   }
}

Finally, the code to complete or cancel the transition is below. It’s critically important that, no matter what, completeTransition: is called on the transition context that was passed to startInteractiveTransition:. We’ll call this method in the completion block of our animation.

- (void)finishInteractiveTransition {
   id<UIViewControllerContextTransitioning> transitionContext = self.transitionContext;
   
   UIViewController *fromViewController = [transitionContext viewControllerForKey:UITransitionContextFromViewControllerKey];
   UIViewController *toViewController = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
   
   if (self.presenting)
   {
       CGRect endFrame = [[transitionContext containerView] bounds];
       
       [UIView animateWithDuration:0.5f animations:^{
           toViewController.view.frame = endFrame;
       } completion:^(BOOL finished) {
           [transitionContext completeTransition:YES];
       }];
   }
   else {
       CGRect endFrame = CGRectOffset([[transitionContext containerView] bounds], -CGRectGetWidth([[self.transitionContext containerView] bounds]), 0);
       
       [UIView animateWithDuration:0.5f animations:^{
           fromViewController.view.frame = endFrame;
       } completion:^(BOOL finished) {
           [transitionContext completeTransition:YES];
       }];
   }
}

- (void)cancelInteractiveTransition {
   id<UIViewControllerContextTransitioning> transitionContext = self.transitionContext;
   
   UIViewController *fromViewController = [transitionContext viewControllerForKey:UITransitionContextFromViewControllerKey];
   UIViewController *toViewController = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
  
   if (self.presenting)
   {
       CGRect endFrame = CGRectOffset([[transitionContext containerView] bounds], -CGRectGetWidth([[transitionContext containerView] bounds]), 0);
       
       [UIView animateWithDuration:0.5f animations:^{
           toViewController.view.frame = endFrame;
       } completion:^(BOOL finished) {
           [transitionContext completeTransition:NO];
       }];
   }
   else {
       CGRect endFrame = [[transitionContext containerView] bounds];
       
       [UIView animateWithDuration:0.5f animations:^{
           fromViewController.view.frame = endFrame;
       } completion:^(BOOL finished) {
           [transitionContext completeTransition:NO];
       }];
   }
}

We’ve driven this transition completely using plain, boring UIView block animations. What’d be super-cool is to use the new UIKit Dynamics to drive the animations. That’s beyond the scope of this tutorial, but you’ll find the code for it in the TLMenuDynamicInteractor. Just set the USE_UIKIT_DYNAMICS C macro to YES to use the dynamic interactor instead.

The most important thing to note about the dynamic version of the interactor is that, unlike our last post’s example of driving an attachment behaviour with a gesture recognizer, the gesture recognizer callback method does not touch the attachment behaviour directly.

All of the sample code we’ve discussed is openly available on GitHub. Check it out and let us know what you think.

As we’ve seen, custom UIViewController transitions are not difficult. You’re now armed to make super-awesome animations and we’re excited to see what you come up with.

Implementing a Bouncy UICollectionViewLayout with UIKit Dynamics
Mon, 23 Sep 2013

We’ve previously discussed using UIKit Dynamics to make realistic-feeling interfaces by applying the physics simulation to instances of UIView in our interface. In that article, we mentioned that a UIView is only one example of a concrete implementation of the UIDynamicItem protocol, alluding to the fact that another class conforms to the protocol. That other class is UICollectionViewLayoutAttributes.

Today we’re going to explore how to make a “bouncy” UICollectionView. An important distinction between our code and the code discussed in the WWDC videos is that ours will scale. Apple’s implementation “cheated” by using a very simple implementation that broke down after a few hundred cells. This collection view, in contrast, contains ten thousand rows and scrolls like butter:

Let’s start from the top. We’ll need a UICollectionViewController set up so we can display some cells.
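
A minimal setup is enough here (a sketch; any registered cell class will do):

// A plain UICollectionViewController subclass backing ten thousand cells.
- (NSInteger)collectionView:(UICollectionView *)collectionView
     numberOfItemsInSection:(NSInteger)section {
    return 10000;
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    UICollectionViewCell *cell =
        [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell"
                                                  forIndexPath:indexPath];
    return cell;
}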

Ten thousand cells … that’s quite a lot! Don’t worry – we’ll be fine because we’re going to use a tiling technique. More on that later.

The next step is to define our collection view layout, called TLSpringFlowLayout. We’ll use some properties to keep track of a few things.

@interface TLSpringFlowLayout ()

@property (nonatomic, strong) UIDynamicAnimator *dynamicAnimator;

@property (nonatomic, strong) NSMutableSet *visibleIndexPathsSet;
@property (nonatomic, assign) CGFloat latestDelta;

@end

The dynamic animator is used to actually run the physics simulation that powers our bounciness. The second group of properties are used for our tiling – I’ll describe them later.

We’ll use our initializer to set up some properties of our collection view, as well as create our dynamic animator and our visibleIndexPathsSet.

-(id)init {
   if (!(self = [super init])) return nil;
   
   self.minimumInteritemSpacing = 10;
   self.minimumLineSpacing = 10;
   self.itemSize = CGSizeMake(300, 44);
   self.sectionInset = UIEdgeInsetsMake(20, 10, 10, 10);
   
   self.dynamicAnimator = [[UIDynamicAnimator alloc] initWithCollectionViewLayout:self];
   self.visibleIndexPathsSet = [NSMutableSet set];
   
   return self;
}

Nice and easy! Next, we’ll need to add our dynamic behaviours to the dynamic animator. We’ll do this in prepareLayout. The prepareLayout method is called a lot – the dynamic animator takes care of invalidating the layout whenever the simulation state changes, causing the layout to re-prepare itself. UICollectionView and UIKit Dynamics are really well-designed to work together.

In our prepareLayout method, we’re going to call our super implementation first (this is very important). Then we’re going to calculate a CGRect that represents the visible area, plus a little extra room. We need this extra room to provide some “breathing room” in case the collection view is scrolling faster than the dynamic animator can keep up. I used 100 points, a value determined experimentally. After we have calculated our visible rect, we grab the layout attributes inside that rect from our super class.

[super prepareLayout];

CGRect visibleRect = CGRectInset((CGRect){.origin = self.collectionView.bounds.origin, .size = self.collectionView.frame.size}, -100, -100);

NSArray *itemsInVisibleRectArray = [super layoutAttributesForElementsInRect:visibleRect];

Next, we grab the index paths for the items that are visible. We’ll put them in a set so we can efficiently check for inclusion within that set later on.
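
That set is built straight from the layout attributes we just fetched (a one-line reconstruction; the variable name matches its use in the code below):

// Collect the index paths of every item in the visible rect.
NSSet *itemsIndexPathsInVisibleRectSet =
    [NSSet setWithArray:[itemsInVisibleRectArray valueForKey:@"indexPath"]];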

The trick for tiling now is to do things in two steps:

  1. Remove any behaviours in the dynamic animator that represent items whose index path is no longer visible.
  2. Add new behaviours for items whose index paths are just becoming visible.

The first step is accomplished with the following code.

NSArray *noLongerVisibleBehaviours = [self.dynamicAnimator.behaviors filteredArrayUsingPredicate:[NSPredicate predicateWithBlock:^BOOL(UIAttachmentBehavior *behaviour, NSDictionary *bindings) {
   BOOL currentlyVisible = [itemsIndexPathsInVisibleRectSet member:[[[behaviour items] lastObject] indexPath]] != nil;
   return !currentlyVisible;
}]];

[noLongerVisibleBehaviours enumerateObjectsUsingBlock:^(id obj, NSUInteger index, BOOL *stop) {
   [self.dynamicAnimator removeBehavior:obj];
   [self.visibleIndexPathsSet removeObject:[[[obj items] lastObject] indexPath]];
}];

The first line creates an array of behaviours that are no longer visible by relying on our set of currently visible index paths. Then we iterate over those behaviours and remove them from both the dynamic animator and the visible index paths set.

The second step is done using the following code.

NSArray *newlyVisibleItems = [itemsInVisibleRectArray filteredArrayUsingPredicate:[NSPredicate predicateWithBlock:^BOOL(UICollectionViewLayoutAttributes *item, NSDictionary *bindings) {
   BOOL currentlyVisible = [self.visibleIndexPathsSet member:item.indexPath] != nil;
   return !currentlyVisible;
}]];

CGPoint touchLocation = [self.collectionView.panGestureRecognizer locationInView:self.collectionView];

[newlyVisibleItems enumerateObjectsUsingBlock:^(UICollectionViewLayoutAttributes *item, NSUInteger idx, BOOL *stop) {
   CGPoint center = item.center;
   UIAttachmentBehavior *springBehaviour = [[UIAttachmentBehavior alloc] initWithItem:item attachedToAnchor:center];
   
   springBehaviour.length = 0.0f;
   springBehaviour.damping = 0.8f;
   springBehaviour.frequency = 1.0f;
   
   if (!CGPointEqualToPoint(CGPointZero, touchLocation)) {
       CGFloat distanceFromTouch = fabsf(touchLocation.y - springBehaviour.anchorPoint.y);
       CGFloat scrollResistance = distanceFromTouch / 1500.0f;
       
       if (self.latestDelta < 0) {
           center.y += MAX(self.latestDelta, self.latestDelta*scrollResistance);
       }
       else {
           center.y += MIN(self.latestDelta, self.latestDelta*scrollResistance);
       }
       item.center = center;
   }
   
   [self.dynamicAnimator addBehavior:springBehaviour];
   [self.visibleIndexPathsSet addObject:item.indexPath];
}];

First, we calculate the newly visible UICollectionViewLayoutAttributes (which will be the dynamic items in our dynamic animator’s behaviours). We grab the touch location of our collection view’s pan gesture recognizer (we’ll use this shortly). Then we enumerate over the newly visible items and add a new attachment behaviour for each layout attributes item. If the touchLocation is not (0, 0), that indicates that the user is touching the screen and we need to create the dynamic behaviour “in flight.” We modify the centre point of the UICollectionViewLayoutAttributes item so that it’s pulled by the springBehaviour. The math here is explained later when we discuss our shouldInvalidateLayoutForBoundsChange: method.

Finally, we add our behaviour to our dynamic animator and its item’s index path to our visible index paths set. The tiling part of our collection view layout is now complete.

The remainder is largely boilerplate. We want the collection view layout to return the items specified by the dynamic animator instead of the regular superclass implementation, so we need to override the following two methods.

-(NSArray *)layoutAttributesForElementsInRect:(CGRect)rect {
   return [self.dynamicAnimator itemsInRect:rect];
}

-(UICollectionViewLayoutAttributes *)layoutAttributesForItemAtIndexPath:(NSIndexPath *)indexPath {
   return [self.dynamicAnimator layoutAttributesForCellAtIndexPath:indexPath];
}

These methods represent another way that UIDynamicAnimator and UICollectionView were designed to work together.

Next, when the collection view scrolls, we need to “pull” the collection view layout attributes items. To do so, we’ll calculate how much the collection view has been scrolled by – the delta – and store it in our latestDelta property to be used by the prepareLayout method. Then we grab our touchLocation to figure out where the user’s finger is located. We want cells closer to the finger to move immediately while cells further away to lag behind. To accomplish this, we’re going to enumerate over each of the dynamic behaviours in our dynamic animator, calculate the distance of that behaviour’s item from the touch point, and scale the amount of change in our collection view layout attributes item appropriately.

-(BOOL)shouldInvalidateLayoutForBoundsChange:(CGRect)newBounds {
   UIScrollView *scrollView = self.collectionView;
   CGFloat delta = newBounds.origin.y - scrollView.bounds.origin.y;
   
   self.latestDelta = delta;
   
   CGPoint touchLocation = [self.collectionView.panGestureRecognizer locationInView:self.collectionView];
   
   [self.dynamicAnimator.behaviors enumerateObjectsUsingBlock:^(UIAttachmentBehavior *springBehaviour, NSUInteger idx, BOOL *stop) {
       CGFloat distanceFromTouch = fabsf(touchLocation.y - springBehaviour.anchorPoint.y);
       CGFloat scrollResistance = distanceFromTouch / 1500.0f;
       
       UICollectionViewLayoutAttributes *item = [springBehaviour.items firstObject];
       CGPoint center = item.center;
       if (delta < 0) {
           center.y += MAX(delta, delta*scrollResistance);
       }
       else {
           center.y += MIN(delta, delta*scrollResistance);
       }
       item.center = center;
       
       [self.dynamicAnimator updateItemUsingCurrentState:item];
   }];
   
   return NO;
}

We cap our movements with MIN or MAX so that far-away cells don’t begin moving in the opposite of the intended direction. The denominator of our scrollResistance determines how “bouncy” our collection view becomes. The smaller the denominator, the bouncier.

For each UICollectionViewLayoutAttributes item, we need to let the dynamic animator know that its position has changed so that it will take it into account within its simulation (that is to say, changes made manually to UIDynamicItem objects aren’t automatically propagated to the dynamic animator). Finally, we return NO from this method – the dynamic animator will take care of invalidating the layout for us.

That’s all there is to it. With under 150 lines of code, you can have efficient, bouncy collection view layouts. We’re really excited to see what other developers do by combining UICollectionView and UIDynamicAnimator.

Introduction to UIKit Dynamics
Fri, 20 Sep 2013

iOS 7 is a real conundrum. It juxtaposes its smooth, platonic interface elements with the physical realism of making those elements respond realistically to user interaction. We already covered UIMotionEffects, which adjust the appearance of an interface to the way the user is holding a device. Today, we’re going to cover realistic animations using UIKit Dynamics.

In order to create truly realistic animations on iOS 6 and prior, it was necessary to have a deep understanding of math, physics, and the Core Animation library. Not anymore.

UIKit Dynamics are a new way to animate interfaces with realistic effects. They’re realistic because they’re powered by an underlying two-dimensional physics engine, but you don’t need any knowledge of the physics implementation to create stunning animations.

This article is going to take you through the fundamentals you need to be aware of when using UIKit Dynamics before moving on to a fun demonstration of their power. This is not meant to be a comprehensive guide, but rather an overview to get you ready to work with UIKit Dynamics in your own code. The possibilities are endless and we just can’t cover everything here.

The core component of UIKit Dynamics is the UIDynamicAnimator. This object wraps an underlying physics engine. By itself, a dynamic animator doesn’t do anything. You’ll need to add behaviours to it. These behaviours interact within the physics engine.

A UIKit Dynamics behaviour is the core unit of composition for a UIKit Dynamics animation. These behaviours define how their UIDynamicItems interact with the physics simulation. But what’s a UIDynamicItem?

UIDynamicItem is a protocol that defines a centre, a bounds, and a transform (only two-dimensional transforms are used). UIView conforms to this protocol, and views are the most common dynamic items you’ll animate. You can also use UICollectionViewLayoutAttributes with UIDynamicBehaviors, but we’re not going to cover that today.

The process to run a UIKit Dynamics animation is:

  1. Create a UIDynamicAnimator and store it in a strong property (you are responsible for retaining it).
  2. Create one or more UIDynamicBehaviors. Each behaviour should have one or more items, typically a view to animate.
  3. Make sure that the initial state of the items used in the UIDynamicBehaviors is a valid state within the UIDynamicAnimator simulation.
  4. ???
  5. Profit!

Seriously, it’s that easy. Set up your animator, add your behaviours, sit back, and enjoy some sweet, sweet physics. It’s so easy because UIKit Dynamics provides a declarative interface: it abstracts away the underlying physics and lets you worry only about the intention of the animation.

Let’s look at a simple example. We want to animate a view “dropping” due to the force of gravity, then colliding with the bottom of its superview.

That’s pretty cool! Let’s see the code.

self.animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];

UIGravityBehavior* gravityBehavior = [[UIGravityBehavior alloc] initWithItems:@[self.redSquare]];
[self.animator addBehavior:gravityBehavior];

UICollisionBehavior* collisionBehavior = [[UICollisionBehavior alloc] initWithItems:@[self.redSquare]];
collisionBehavior.translatesReferenceBoundsIntoBoundary = YES;
[self.animator addBehavior:collisionBehavior];

That’s it! You can see the code is very straightforward: we create our dynamic animator, then our gravity and collision behaviours, and the dynamic animator takes over to simulate the fall.

What if we want to make our red square “bouncier”? Well, UIKit Dynamics provides a way to access the low-level physics properties of our dynamic items.

As you can see, after the red square hits the bottom of its superview, it bounces several times more than before. That’s because we’ve modified its elasticity within the physics simulator. All we need to add to the previous code is the following:

UIDynamicItemBehavior *elasticityBehavior = [[UIDynamicItemBehavior alloc] initWithItems:@[self.redSquare]];
elasticityBehavior.elasticity = 0.7f;
[self.animator addBehavior:elasticityBehavior];

Dynamic behaviours are also composable: you can subclass UIDynamicBehavior and add child behaviours to it. This way, you can abstract combinations of behaviours into reusable units. Let’s take a look at an example to see what I mean.

Consider the above extra-bouncy square. It uses three dynamic behaviours. Let’s combine those into a single UIDynamicBehavior subclass. The initializer looks like the following:

-(instancetype)initWithItems:(NSArray *)items {
   if (!(self = [super init])) return nil;
   
   UIGravityBehavior* gravityBehavior = [[UIGravityBehavior alloc] initWithItems:items];
   [self addChildBehavior:gravityBehavior];
   
   UICollisionBehavior* collisionBehavior = [[UICollisionBehavior alloc] initWithItems:items];
   collisionBehavior.translatesReferenceBoundsIntoBoundary = YES;
   [self addChildBehavior:collisionBehavior];
   
   UIDynamicItemBehavior *elasticityBehavior = [[UIDynamicItemBehavior alloc] initWithItems:items];
   elasticityBehavior.elasticity = 0.7f;
   [self addChildBehavior:elasticityBehavior];
   
   return self;
}

Now we can re-use this behaviour wherever we want. Our previous example’s code is now reduced to the following:
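A minimal sketch of that reduced call site – TLBouncyBehavior is our hypothetical name for the subclass above:

self.animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];

// One composed behaviour replaces the three we added individually.
TLBouncyBehavior *bouncyBehavior = [[TLBouncyBehavior alloc] initWithItems:@[self.redSquare]];
[self.animator addBehavior:bouncyBehavior];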

Nice! See how abstracting our behaviours leads to less code and a clear separation of concerns? The best part is that the underlying physics engine is just as fast, meaning there’s no overhead introduced by this abstraction.

Let’s take a look at another example. We’re going to recreate UIAlertView with our own transition animations, driven by UIKit Dynamics. The purpose of this demo is only to show off dynamics and not to make a robust UIAlertView replacement.

We’ll need two methods: show and dismiss. The show method takes the alert view, which is offscreen before show is called, and uses a UISnapBehavior to move it to the center of the screen.

We increase the snap behaviour’s damping so the view has a little less spring when it settles.

For our dismiss method, we want to add gravity to our simulation and add some angular velocity to the alert view (to make it spin as it falls off the screen).
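Here’s a rough sketch of what those two methods might look like (property names like self.alertView and self.snapBehavior are assumptions; the complete version is linked below):

- (void)show {
    // Snap the offscreen alert view to the centre of the reference view.
    self.snapBehavior = [[UISnapBehavior alloc] initWithItem:self.alertView snapToPoint:self.view.center];
    self.snapBehavior.damping = 0.65f; // a little above the 0.5 default, for less wobble
    [self.animator addBehavior:self.snapBehavior];
}

- (void)dismiss {
    [self.animator removeBehavior:self.snapBehavior];

    // Let the alert fall off the bottom of the screen...
    UIGravityBehavior *gravityBehavior = [[UIGravityBehavior alloc] initWithItems:@[self.alertView]];
    [self.animator addBehavior:gravityBehavior];

    // ...spinning as it goes.
    UIDynamicItemBehavior *itemBehavior = [[UIDynamicItemBehavior alloc] initWithItems:@[self.alertView]];
    [itemBehavior addAngularVelocity:M_PI_2 forItem:self.alertView];
    [self.animator addBehavior:itemBehavior];
}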

The code for this UIAlertView replacement is available on GitHub.

Let’s look at a more complex example. We’re going to recreate a common element in iOS apps: the sidebar menu. We’re going to have ours opened only with a swipe from the left edge of the screen (using the new UIScreenEdgePanGestureRecognizer in iOS 7), and closed only with a swipe from the right edge of the screen.

In order to coach the user that they need to drag from the edge of the screen, we’ll use a button that, when tapped, applies a momentary force and causes the content view to bounce slightly.

This is achieved easily enough. We’ll set up our dynamic animator and behaviours in viewDidAppear: (so that we can trust the view geometry, which isn’t always accurate in viewDidLoad).
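A sketch of that setup, with property names of our own choosing (the real implementation is on GitHub):

self.animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];

// Inset the right collision boundary by -280 points so the content view
// can travel 280 points to the right, revealing the menu beneath it.
UICollisionBehavior *collisionBehavior = [[UICollisionBehavior alloc] initWithItems:@[self.contentView]];
[collisionBehavior setTranslatesReferenceBoundsIntoBoundaryWithInsets:UIEdgeInsetsMake(0, 0, 0, -280.0f)];
[self.animator addBehavior:collisionBehavior];

// Gravity pulls the content view to the left, keeping the menu closed at rest.
self.gravityBehavior = [[UIGravityBehavior alloc] initWithItems:@[self.contentView]];
self.gravityBehavior.gravityDirection = CGVectorMake(-1.0f, 0.0f);
[self.animator addBehavior:self.gravityBehavior];

// An instantaneous push for the “bounce” hint.
self.pushBehavior = [[UIPushBehavior alloc] initWithItems:@[self.contentView] mode:UIPushBehaviorModeInstantaneous];
[self.animator addBehavior:self.pushBehavior];

// An item behaviour we’ll use later to transfer the pan gesture’s velocity.
self.itemBehavior = [[UIDynamicItemBehavior alloc] initWithItems:@[self.contentView]];
[self.animator addBehavior:self.itemBehavior];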

Notice that the pushBehavior has no force applied to it. We’ll only apply force when that button is pressed.
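Something like this, in the button’s (hypothetical) action method – the push magnitude is chosen by eye:

- (IBAction)didTapMenuButton:(id)sender {
    self.pushBehavior.pushDirection = CGVectorMake(35.0f, 0.0f);
    self.pushBehavior.active = YES;
}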

That will set the force vector for the pushBehavior and activate it (it deactivates itself after being applied).

And that’s it. Imagine how much code that would have involved using CAKeyframeAnimations or UIView animations. UIKit Dynamics has made it easy to make realistic animations.

Now for actually opening the menu. If you recall, we created our collision boundaries to have a right edge inset of -280 points (meaning our contentView will never pass that boundary). All we need to do to open the menu is reverse the x-component of the gravity vector. This will force the content view to “fall” to the right edge of the boundary.

In order to accomplish this, we’ll use a UIScreenEdgePanGestureRecognizer. When our recognizer first begins, we’ll attach a UIAttachmentBehavior to our contentView. Whenever our recognizer changes, we’ll just update our attachment behaviour’s anchor point.

When our recognizer’s state is “ended”, we’ll check the velocity to decide which direction the user was last moving – to open the menu or close it. We’ll set our gravity’s direction accordingly and use that velocity to “push” our view in that direction.
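Putting those three states together, the gesture handler might look like the following sketch (it assumes the behaviours we stored as properties above, plus an attachmentBehavior property):

- (void)handleScreenEdgePan:(UIScreenEdgePanGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:self.view];
    location.y = CGRectGetMidY(self.view.bounds); // keep the attachment level

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        self.attachmentBehavior = [[UIAttachmentBehavior alloc] initWithItem:self.contentView attachedToAnchor:location];
        [self.animator addBehavior:self.attachmentBehavior];
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged) {
        self.attachmentBehavior.anchorPoint = location;
    }
    else if (recognizer.state == UIGestureRecognizerStateEnded) {
        [self.animator removeBehavior:self.attachmentBehavior];
        self.attachmentBehavior = nil;

        // Which way was the user last moving? Open or close accordingly.
        CGPoint velocity = [recognizer velocityInView:self.view];
        BOOL opening = velocity.x > 0;
        self.gravityBehavior.gravityDirection = CGVectorMake(opening ? 1.0f : -1.0f, 0.0f);

        // Carry the gesture’s momentum into the simulation.
        [self.itemBehavior addLinearVelocity:CGPointMake(velocity.x, 0) forItem:self.contentView];
    }
}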

Sweet! That’s really all there is to it. You can download the complete source for that example from GitHub.

This is a pretty common pattern when combining pan gesture recognizers with UIKit Dynamics. When the gesture begins, remove gravity or other conflicting forces and add an attachment behaviour. When it changes, update the attachment. When it ends, remove the attachment and re-add gravity.

One final note is that the physics engine that powers UIKit Dynamics expects sane values. That means don’t use CGFLOAT_MAX or CGFLOAT_MIN because they won’t work as you’d expect (I tried).

UIKit Dynamics are not meant to replace UIView animations or CAKeyframeAnimations – they all solve specific problems and all have their place in your developer tool belt. UIKit Dynamics are also not meant to be used to write games – Apple’s recommendation is to use SpriteKit for that instead.

The interface interactions possible with UIKit Dynamics are far too numerous to cover in a single article. I would highly encourage you to take what we discussed today and try implementing some simple animations in your own apps.

Introduction to UIMotionEffect /blog/introduction-to-uimotioneffect/ /blog/introduction-to-uimotioneffect/#respond Thu, 19 Sep 2013 14:03:19 +0000 /blog/?p=10891 When Apple announced iOS 7, they presented the world with a much “flatter” design than iOS 6. Gradients and shadows were muted, removing some of the key cues the operating system had traditionally used to convey a sense of depth. Apple introduced a new way to convey depth: motion effects.

As the name implies, these are not strictly visual effects: motion effects respond to the orientation of the user’s device. They provide an easy way to map a real-world attribute of the device – its orientation – to the interface of your application (this is something we’ve done before).

Motion effects affect the appearance of the interface as the device is tilted horizontally and vertically.


For example, the red box below moves 50 points to the left and right as the device is tilted. Creating this effect is relatively easy. First, let’s take a look at the context of motion effects within UIKit.

UIMotionEffect is an abstract class that’s meant to be subclassed. Luckily, Apple has provided just such a subclass for us, one that covers 99% of our needs: UIInterpolatingMotionEffect. This class is initialized with a key path and a type (either horizontal or vertical motion). The class will set the key path’s value depending on how the user moves the device.

After creating an instance of UIInterpolatingMotionEffect, specify a minimum and maximum value. These properties are id, so you’ll need to box them in NSValue objects. Finally, add the motion effect to a view. The motion effect applied to the red square above can be recreated and applied with the following code:
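Something like the following minimal sketch (redView stands in for whatever view you’re decorating; this mirrors the horizontal half of the group example below):

UIInterpolatingMotionEffect *horizontalMotionEffect = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
horizontalMotionEffect.minimumRelativeValue = @(-50);
horizontalMotionEffect.maximumRelativeValue = @(50);
[redView addMotionEffect:horizontalMotionEffect];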

We can go even further. For example, by changing the shadow offset on the view’s layer (we’ll see the code for this shortly), the shadow will move underneath the view as the device tilts, adding to the sense of depth.

Instead of creating effects and adding them individually, we can create a UIMotionEffectGroup. This works similarly to a CAAnimationGroup.

UIInterpolatingMotionEffect *verticalMotionEffect = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.y" type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
verticalMotionEffect.minimumRelativeValue = @(-50);
verticalMotionEffect.maximumRelativeValue = @(50);

UIInterpolatingMotionEffect *horizontalMotionEffect = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
horizontalMotionEffect.minimumRelativeValue = @(-50);
horizontalMotionEffect.maximumRelativeValue = @(50);

UIMotionEffectGroup *group = [UIMotionEffectGroup new];
group.motionEffects = @[horizontalMotionEffect, verticalMotionEffect];
[redView addMotionEffect:group];

Beyond simple effects like modifying the position of a view, we can also affect animatable properties on the CALayer of the view.

UIInterpolatingMotionEffect *shadowEffect = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"layer.shadowOffset" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
shadowEffect.minimumRelativeValue = [NSValue valueWithCGSize:CGSizeMake(-10, 5)];
shadowEffect.maximumRelativeValue = [NSValue valueWithCGSize:CGSizeMake(10, 5)];
[redView addMotionEffect:shadowEffect]; // attach the effect, just like any other

What about creating your own motion effects? Apple’s built-in UIInterpolatingMotionEffect class does a great job, but it can only interpolate linearly between the minimum and maximum values. If you want a different interpolation curve – say, a parabola that starts out subtle and becomes quite pronounced as the user tilts their device further – we can build that as a UIMotionEffect subclass.

Subclassing UIMotionEffect is pretty straightforward. There are two protocols, NSCopying and NSCoding, that need to be conformed to. Beyond that boilerplate, we also have keyPathsAndRelativeValuesForViewerOffset: to implement. This method returns a dictionary of key paths and values to modify, based on the current viewer offset passed in as a parameter.

Let’s look at an example. We’re going to implement a simple parabolic interpolation on a float value. Using float instead of id lets us get away with a lot less code in our example. Otherwise, we’re going to copy the interface of UIInterpolatingMotionEffect.

@interface TLParabolicFloatMotionEffect : UIMotionEffect

- (instancetype)initWithKeyPath:(NSString *)keyPath type:(UIInterpolatingMotionEffectType)type;

@property (readonly, nonatomic) NSString *keyPath;
@property (readonly, nonatomic) UIInterpolatingMotionEffectType type;

@property (assign, nonatomic) CGFloat minimumRelativeValue;
@property (assign, nonatomic) CGFloat maximumRelativeValue;

@end

Now we have our implementation:

@implementation TLParabolicFloatMotionEffect

#pragma mark - NSCopying and NSCoding Methods

- (id)copyWithZone:(NSZone *)zone {
   TLParabolicFloatMotionEffect *otherEffect = [super copyWithZone:zone];
   
   if (otherEffect) {
       otherEffect->_minimumRelativeValue = self.minimumRelativeValue;
       otherEffect->_maximumRelativeValue = self.maximumRelativeValue;
       otherEffect->_type = self.type;
       otherEffect->_keyPath = self.keyPath;
   }
   
   return otherEffect;
}

- (void)encodeWithCoder:(NSCoder *)aCoder {
   [super encodeWithCoder:aCoder];
   
   [aCoder encodeObject:@(self.minimumRelativeValue) forKey:@"minimumRelativeValue"];
   [aCoder encodeObject:@(self.maximumRelativeValue) forKey:@"maximumRelativeValue"];
   [aCoder encodeObject:@(self.type) forKey:@"type"];
   [aCoder encodeObject:self.keyPath forKey:@"keyPath"];
}

- (id)initWithCoder:(NSCoder *)aDecoder {
   if (!(self = [super initWithCoder:aDecoder])) return nil;
   
   _minimumRelativeValue = [[aDecoder decodeObjectForKey:@"minimumRelativeValue"] floatValue];
   _maximumRelativeValue = [[aDecoder decodeObjectForKey:@"maximumRelativeValue"] floatValue];
   _type = (UIInterpolatingMotionEffectType)[[aDecoder decodeObjectForKey:@"type"] integerValue];
   _keyPath = [aDecoder decodeObjectForKey:@"keyPath"];
   
   return self;
}

#pragma mark - Public Methods

- (instancetype)initWithKeyPath:(NSString *)keyPath type:(UIInterpolatingMotionEffectType)type {
   if (!(self = [super init])) return nil;
   
   _type = type;
   _keyPath = keyPath;
   
   return self;
}

#pragma mark - UIMotionEffects Methods

- (NSDictionary *)keyPathsAndRelativeValuesForViewerOffset:(UIOffset)viewerOffset {
   CGFloat ratio = self.type == UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis ? viewerOffset.horizontal : viewerOffset.vertical;
   
   if (ratio > 0) {
       return @{self.keyPath: @(self.maximumRelativeValue * pow(ratio, 2))};
   }
   else {
       // Squaring discards the sign, so use the (typically negative) minimum
       // value for offsets in the negative direction.
       return @{self.keyPath: @(self.minimumRelativeValue * pow(ratio, 2))};
   }
}

@end

Motion effects are meant to be declarative, which means that the return value of keyPathsAndRelativeValuesForViewerOffset: should always be the same for the same viewerOffset parameter. Don’t rely on internal state to determine the return value of this method.

There are a few caveats to using motion effects. They’re treated like animations on the view, so you can add and remove them in animation blocks to have their effects ease in and out as they’re added or removed. Adding or removing an effect to a visible, on-screen view should almost always be done inside of an animation block to prevent the view from jumping suddenly.

[UIView animateWithDuration:0.5f animations:^{
    [self.redView removeMotionEffect:group];
}];

Animating changes to properties of the view while a motion effect is added is also problematic. For example, if you’re animating the position while a motion effect affecting the position is applied to the view, the resulting animation suddenly removes the motion effect before beginning. Not a desirable effect.

You should take care not to go overboard with these effects – they are meant to be subtle effects used to convey a sense of depth, not a sense of nausea.

Adopting iOS 7 APIs /blog/adopting-ios-7-apis/ /blog/adopting-ios-7-apis/#respond Wed, 18 Sep 2013 13:46:40 +0000 /blog/?p=10885 iOS 7 changed the game in terms of application design and development. We’ve already released our iOS 7 PSD in order to help designers get a leg-up on the new visual feel, so let’s go ahead and explore some of the new APIs that developers need to adopt.

This isn’t meant to be a comprehensive document – these are only the most important things developers need to be aware of in order to migrate to iOS 7.

Tint Colour

In early versions of iOS, developers used the tintColor property on UIToolbar to affect the rendering of bar button items in that instance. Later, in iOS 5, tintColor was introduced to more views to help developers style their apps. iOS 7 takes this move a step further and declares the property on UIView itself, introducing an app-wide tintColor property (that of the app’s keyWindow).

The tintColor property cascades down the view hierarchy, much like CSS. A view will inherit its parent’s tintColor unless one is explicitly set on that specific view. The tintColor is meant to help define the personality of your application. The Calendar application, for example, uses a nice shade of red for its colour.

In addition to the tintColor property, Apple also introduced a tintAdjustmentMode. This is important because when a view hierarchy becomes inactive (for example, when a UIAlertView is presented), the tintColor changes to become desaturated. You can override this behaviour by setting the tintAdjustmentMode to UIViewTintAdjustmentModeNormal instead of UIViewTintAdjustmentModeAutomatic. You can also implement your own modal dialogues by setting the value to be UIViewTintAdjustmentModeDimmed. This property cascades down the view hierarchy in the same way that tintColor does.
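For example, a quick sketch of both properties in action (the view names are placeholders):

// Give the whole app a personality by tinting the key window.
self.window.tintColor = [UIColor redColor];

// Keep one view saturated even while an alert dims everything else...
importantView.tintAdjustmentMode = UIViewTintAdjustmentModeNormal;

// ...or dim a hierarchy behind a custom modal of your own.
backgroundView.tintAdjustmentMode = UIViewTintAdjustmentModeDimmed;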

Status Bar Changes

The status bar has been a mainstay of the iOS interface since the original iPhone launch in 2007. It has different modes: semitransparent, black, and default (blue). In iOS 6, we saw the default mode adopt a gradient matching the tintColor of the navigation bar, if any.

iOS 7 introduces a whole new style for the status bar. It no longer rests above apps by default, but is rendered on top of the app’s content. Status bars no longer render a background at all – they are transparent, taking on the look of whatever app is on screen at the moment.

This new visual treatment invalidates the older APIs. Instead of setting the status bar style as a global, mutable variable belonging to the shared UIApplication instance, the topmost view controller is queried about its preference for the status bar’s appearance. This covers the colour of the status bar contents (light or dark), whether or not the bar is hidden, and the animation to use when transitioning between hidden states. If the appearance changes, all the view controller has to do is call setNeedsStatusBarAppearanceUpdate on itself to force the app to re-query it.
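The overrides themselves are one-liners; a sketch (the return values are just examples):

- (UIStatusBarStyle)preferredStatusBarStyle {
    return UIStatusBarStyleLightContent; // light text over dark content
}

- (BOOL)prefersStatusBarHidden {
    return NO;
}

- (UIStatusBarAnimation)preferredStatusBarUpdateAnimation {
    return UIStatusBarAnimationSlide;
}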

If you really want to opt out of this behaviour, set UIViewControllerBasedStatusBarAppearance in your Info.plist to NO. This is only a stop-gap, though. I expect that attribute will be deprecated shortly.

The new treatment has also left the wantsFullScreenLayout property on UIViewController deprecated. On iOS 7, it’s as though wantsFullScreenLayout is always set to YES.

Asset Catalogs

When Apple first introduced the iPhone SDK in 2008, developers used a method called imageNamed: to retrieve image assets from the app bundle. When the iPad was introduced, a naming convention was established to help us differentiate between iPhone- and iPad-specific assets. Images were named “image~iPad.png” or “image~iPhone.png”. Then Retina screens were introduced and another layer was added onto the naming convention. Files were named “image~iPad@2x”. Or was it “image@2x~iPad”? I can never remember.

That’s the problem that Asset Catalogs solve: managing image assets used through your app.

What’s really great about Asset Catalogs is how they integrate with .xibs and Storyboards. You can specify the slicing information in the Asset Catalog and see it updated within Interface Builder.

Getting started isn’t hard. Create a new Asset Catalog and select it. Click the plus button in the lower-left corner of the Asset Catalog, select “Import From Project…”, and the rest is pretty automatic.

Once you’ve imported your assets into the Asset Catalog, you’ll be able to use UIImage’s imageNamed: method like you’ve always done. However, you’ll have to update use of image assets in .xibs and Storyboards to remove the file extension. Not a huge deal, but it could be some mundane work.
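For example (the asset name here is hypothetical):

UIImage *avatar = [UIImage imageNamed:@"avatar"]; // no extension, no @2x/~ipad juggling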

One issue that we had when dealing with Asset Catalogs was localization. If your application has localized images, for instance, you’ll be unable to use Asset Catalogs for those particular assets. You can, however, use Asset Catalogs alongside the traditional image asset workflow.

A Tale of Two OS’s

During the WWDC keynote in June, Apple made it very clear that they would prefer that developers drop support for iOS 6 as soon as possible. Fortunately, they are providing tools in the new Xcode 5 to allow you to support both iOS 6 and 7. Use the Assistant Editor to preview your app’s interface in iOS 6 and earlier.

However, those tools come with strings attached. The easiest way to support both OS versions is to use Autolayout. Not using Autolayout yet? “What a great time to transition!”, says Apple. To help, they’ve introduced better tools for using Autolayout in Interface Builder.

That’s fine for .xibs and Storyboards, but what about the code? Well, it’s easy to write separate code paths for the two OS versions using UIDevice’s systemVersion property. However, Apple would really prefer you to use Interface Builder when possible. Not the best answer, since not every problem can be solved in Interface Builder. Let’s cross our fingers that iOS 7 is adopted quickly by consumers.
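A common version-check sketch using systemVersion and a numeric string comparison:

NSString *version = [[UIDevice currentDevice] systemVersion];
BOOL isIOS7OrLater = ([version compare:@"7.0" options:NSNumericSearch] != NSOrderedAscending);

if (isIOS7OrLater) {
    // iOS 7 code path
}
else {
    // iOS 6 code path
}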

iOS 7 migration is going to take a lot of work. Ideally, you’ve been preparing for this since iOS 7 Beta was released. If not, you should begin right away. As we discussed a few months ago, it’s incredibly important to be up-to-date in order to stay relevant to your users.

Reproducing the iOS 7 Mail App’s Interface /blog/reproducing-the-ios-7-mail-apps-interface/ /blog/reproducing-the-ios-7-mail-apps-interface/#respond Tue, 20 Aug 2013 13:25:54 +0000 /blog/?p=10775 iOS 7 introduced a whole new visual layer applied to its existing information architecture. One of the more interesting changes it made to the familiar gestures was how it augmented the swipe-to-delete gesture in the Mail app. Swiping a table view cell now reveals not only a Delete button, but also a “More” button that opens an action sheet with additional options.

When I saw this in the WWDC keynote, I was excited. Hopefully, I thought, Apple would introduce this as a system-wide affordance so other apps could use this gesture to display custom buttons. I’m not able to comment on unreleased beta software, but I will say that Apple has a habit of using custom, private APIs that we don’t have access to. A shame for sure, but also an opportunity to build something fun.

This brings us to today’s tutorial. How can we recreate this interface? Since it’s a useful gesture to use now before iOS 7 is released, we’re going to be building against the iOS 6 SDK, but I’ve ensured that this works against the iOS 7 beta 4 APIs.

Based on the feel of the iOS 7 interface, I’d guess that Apple has implemented the swipe-to-show-options using a UIScrollView (something I can neither confirm nor deny based on inspection using Reveal). On that hunch, I’m going to implement our own version using a scroll view.

It’s important to note before going any further that we’re going to do all of our work inside the cell’s contentView, as per the documentation. Here’s what I’m picturing for the view hierarchy.

The contentView is going to contain a scroll view, which contains the buttons and other contents. We’ll set up our scroll view and add its subviews in our awakeFromNib method (or initWithStyle:reuseIdentifier:, as you prefer). We’re using kCatchWidth as a constant for how far the user has to drag before the scroll view “catches” and, when released, will show the buttons. It’s conveniently also equal to the combined width of both buttons.
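Here’s a sketch of that setup (kCatchWidth’s value, self.scrollView, and self.buttonsView are names and numbers we’re assuming; the repository uses its own):

static CGFloat const kCatchWidth = 120.0f; // combined width of the two buttons (value assumed)

- (void)awakeFromNib {
    [super awakeFromNib];

    // The scroll view fills the cell and can scroll kCatchWidth points past its bounds.
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.contentView.bounds];
    scrollView.contentSize = CGSizeMake(CGRectGetWidth(self.bounds) + kCatchWidth, CGRectGetHeight(self.bounds));
    scrollView.showsHorizontalScrollIndicator = NO;
    scrollView.delegate = self;
    [self.contentView addSubview:scrollView];
    self.scrollView = scrollView;

    // The buttons live in their own container so we can reposition them as a unit.
    UIView *buttonsView = [[UIView alloc] initWithFrame:CGRectMake(CGRectGetWidth(self.bounds) - kCatchWidth, 0, kCatchWidth, CGRectGetHeight(self.bounds))];
    [scrollView addSubview:buttonsView];
    self.buttonsView = buttonsView;
    // ...add the “More” and “Delete” buttons to buttonsView here...

    // The cell’s visible content sits on top, covering the buttons at rest.
    UIView *cellContentView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, CGRectGetWidth(self.bounds), CGRectGetHeight(self.bounds))];
    cellContentView.backgroundColor = [UIColor whiteColor];
    [scrollView addSubview:cellContentView];
}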

We’re placing our buttons within their own view so that it’s easier to reposition. This is important because we need to counteract the scrolling behaviour to make the buttons appear to “stand still.” To achieve this effect, we’ll implement the scrollViewDidScroll: method.
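A sketch of that method, following the prose above:

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // Don’t let the user pull the cell to the right past its resting position.
    if (scrollView.contentOffset.x < 0) {
        scrollView.contentOffset = CGPointZero;
    }

    // Pin the buttons to the right edge so they appear to stand still.
    self.buttonsView.frame = CGRectMake(scrollView.contentOffset.x + (CGRectGetWidth(self.bounds) - kCatchWidth), 0, kCatchWidth, CGRectGetHeight(self.bounds));
}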

This method also prevents the user from “pulling” the scroll view to the right, since we only want the swipe gesture to work when pulling to the left. Next, we’ll need to prevent the scroll view from coming to rest in between the states of showing the buttons and not showing them. To do this, we’ll implement another scroll view delegate method. This method is called when the scroll view is about to begin decelerating, and it gives us an opportunity to direct where the scroll view should come to rest.

This implementation forces the scroll view to come to rest just showing the buttons if it was pulled beyond them, or returns it to the resting CGPointZero state if it hasn’t been pulled far enough. Notice that we’re manipulating a pointer to a CGPoint and not a CGPoint itself. It’s also important to note that we have to manually call setContentOffset:animated: to return the scroll view to its default state; this works around a strange flicker in the animation that I noticed in testing.
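Here’s a sketch of that delegate method (dispatching the manual setContentOffset:animated: call to the next run-loop pass is one way to dodge the flicker):

- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView withVelocity:(CGPoint)velocity targetContentOffset:(inout CGPoint *)targetContentOffset {
    if (scrollView.contentOffset.x > kCatchWidth) {
        // Pulled past the buttons: come to rest just showing them.
        targetContentOffset->x = kCatchWidth;
    }
    else {
        // Not far enough: return to the resting state.
        *targetContentOffset = CGPointZero;
        dispatch_async(dispatch_get_main_queue(), ^{
            [scrollView setContentOffset:CGPointZero animated:YES];
        });
    }
}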

The next step is to register ourselves as a listener for a custom notification that indicates that the containing table view has been scrolled, and that we should reposition our scroll view to its default state of not showing the buttons.
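In sketch form (the notification name here is our invention):

// In awakeFromNib:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(enclosingTableViewDidScroll:) name:@"TLEnclosingTableViewDidScrollNotification" object:nil];

// Snap any open cell back to its default state.
- (void)enclosingTableViewDidScroll:(NSNotification *)notification {
    [self.scrollView setContentOffset:CGPointZero animated:YES];
}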

The remainder of the implementation is very straightforward. A few delegate protocols here, an NSNotification there, and we’re done! It’s all documented in the code comments in the open source repository.

Happy coding!
