At Teehan+Lax, we’ve been working on a project called Krush for several months now. Krush is an interesting application from an iOS architectural standpoint because it touches on a lot of common areas that iOS newcomers have questions about. Specifically, it’s a networked application that hits an API, has an on-disk cache, and presents interesting content. In this post, I’ll be exploring some case studies about aspects of the application: why we chose a certain methodology, how it worked out in practice, and what we would do in hindsight.
We launched Krush as a minimum viable product in 90 days, so the motivation behind “why” we chose certain methodologies was primarily speed: how quickly could we get to the minimum set of features and capabilities required to get something testable to market, and how fast could we iterate on it afterward? These motivations shaped the decisions we made, so if your motivations are different, you should look at our decisions through that lens.
Case Study 1: The Network Layer
The network layer was primarily constructed by my talented colleague Brendan Lynch. The network layer is responsible for all outgoing connections from Krush, be they calls to the server’s API or to our CDN for asset delivery. Everything goes through a common interface.
Instead of using newer APIs like NSURLSession, we opted for the more familiar combination of NSOperation and NSURLConnection. Specifically, we used a request client, owned by our app delegate, that managed all network activity. This request client holds an NSOperationQueue on which our network requests are queued.
The network requests themselves consist of a URL, parameters, and encoding specifications for OAuth. The request objects know how to construct OAuth-signed NSURLRequests, making it trivial to replay a request after a failed connection. Network requests subclass NSOperation and conform to the NSURLConnectionDataDelegate protocol.
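As a rough illustration of that anatomy, a request operation's interface might look something like this (the class and property names here are hypothetical, not Krush's actual ones):

```objc
// A minimal sketch of a request operation. KRXRequestOperation and its
// members are assumed names; the real implementation may differ.
@interface KRXRequestOperation : NSOperation <NSURLConnectionDataDelegate>

@property (nonatomic, copy, readonly) NSURL *URL;
@property (nonatomic, copy, readonly) NSDictionary *parameters;

// Builds an OAuth-signed NSURLRequest from the URL and parameters.
// Because the operation can always rebuild its request, replaying it
// after a failed connection is trivial.
- (NSURLRequest *)signedRequest;

@end
```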
If a network request fails or times out, the request client re-enqueues it automatically, up to a certain number of times, at which point it finally fails.
Every operation has a callback block. When an operation completes or fails, that block is invoked, passing along the data returned from the network and the result of the operation. The callback blocks, which are defined in the request client, transform that data and write it into the on-disk cache, which we’ll cover in the next section.
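The client-side of that arrangement could be sketched roughly like this (again, these names are assumptions for illustration):

```objc
// Hypothetical sketch of the request client's interface. The callback
// carries both the response data and the outcome of the operation.
typedef void (^KRXRequestCallback)(NSData *data, NSError *error);

@interface KRXRequestClient : NSObject

// All network operations are queued here.
@property (nonatomic, strong, readonly) NSOperationQueue *requestQueue;

// Enqueues an operation and invokes the callback on completion or
// failure; failed operations are re-enqueued up to a retry limit
// before the error is finally passed to the callback.
- (void)enqueueOperation:(KRXRequestOperation *)operation
                callback:(KRXRequestCallback)callback;

@end
```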
This network architecture works well in practice. When a request does fail, it’s automatically restarted, so our application is very robust. By going with a familiar approach, instead of a newer iOS 7 API, we were able to get a product out the door faster.
If we had to do it all over, it might be worth investigating NSURLSession in order to reduce code effort and to take advantage of iOS 7’s background fetch API. I’d also want to explore the idea of sending commands from the view controller up the responder chain to the app delegate, which could then forward them to the request client. That way, our view controllers would be completely decoupled from having to know about the request client at all.
Case Study 2: On-Disk Cache
Krush is a very visual application – it downloads and displays a lot of images. Those images, once decompressed from JPEGs into bitmaps for display, take up a lot of memory. A lot. Holding the entire contents of the application in memory is not an option, and downloading each asset every time it is to be displayed would take up far too much of the user’s network resources. The solution was to use an on-disk cache.
For Readability, Brendan had built an on-disk storage system using SQLite, which he was familiar with. However, he was busy building Krush’s network layer while I was building the on-disk cache, and my SQLite-fu is weak. Instead, I relied on what I was familiar with: Core Data.
Core Data isn’t an object persistence library per se, but rather an object graph management framework that just happens to be able to persist data to an on-disk store. We use it as a cache; the store is deleted with every launch of the application.
Application startup is one of the most crucial aspects of an application. If an application doesn’t get up and running in a reasonable amount of time, the user is going to give up on it. In the case of Krush, we were getting feedback from the users and the client that the application was slow at startup. Uh oh.
I opened Instruments and tested the application startup time on a device.
Oh boy were there a lot of network connections being made. In one trace, I measured 170 network requests when the app was first launched. It turned out that we were making lots of requests preemptively instead of on-demand. I changed our network requests to be less optimistic and more on-demand, which was an easy change to make. However, that change led to a lot of interface jitteriness. Again, I measured.
We launched Krush using a very simple Core Data cache because we didn’t have a lot of time to invest in anything more complex. The stack consisted of a single managed object context on the main thread. I’ve never been a fan of prematurely optimizing a problem, anyway; I prefer a measure-adjust-measure cycle. When I measured for jitteriness in the interface, I saw the problem immediately: Core Data was blocking the main thread.
I did some research and decided to use a different approach. The request client instance would own a background context that would do work on its own queue; the background queue and main thread queue would share a single persistent store coordinator.
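In broad strokes, that stack looks something like the following sketch (variable names are mine, and the coordinator setup is elided):

```objc
// A sketch of the two-context stack: both contexts share one persistent
// store coordinator, so saves from one can be merged into the other.
NSPersistentStoreCoordinator *coordinator = /* created at startup */;

NSManagedObjectContext *mainContext = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSMainQueueConcurrencyType];
mainContext.persistentStoreCoordinator = coordinator;

NSManagedObjectContext *backgroundContext = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.persistentStoreCoordinator = coordinator;

// When the background context saves, merge its changes into the main
// context on the main queue.
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSManagedObjectContextDidSaveNotification
                object:backgroundContext
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                [mainContext mergeChangesFromContextDidSaveNotification:note];
            }];
```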
Let’s take a look at an example network request for the details of a user.
The user object already exists in the main managed object context, but not necessarily in the background context. We have to save the main context, ensuring the object exists in the persistent store. Then we grab the objectID from the user and, in the callback block from the network request, fetch the corresponding user object from the background context. There, on a background queue, we perform our JSON parsing and form relationships between the background-context user and other objects in that context. Finally, we save the background context, which fires a notification that merges the background changes into the main context. The corresponding views are updated via KVO. Phew!
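The steps above can be sketched roughly as follows, using hypothetical names like KRXUser and fetchUserDetailsWithCallback: for illustration:

```objc
// Sketch of the user-details flow; error handling elided for brevity.
[mainContext save:NULL]; // ensure the user exists in the persistent store
NSManagedObjectID *userID = user.objectID;

[client fetchUserDetailsWithCallback:^(NSData *data, NSError *error) {
    [backgroundContext performBlock:^{
        // Re-fetch the same user on the background context's queue.
        KRXUser *backgroundUser =
            (KRXUser *)[backgroundContext objectWithID:userID];
        NSDictionary *json =
            [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        // ...parse json and form relationships on backgroundUser...

        // Saving fires the did-save notification, which merges these
        // changes into the main context.
        [backgroundContext save:NULL];
    }];
}];
```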
The results were dramatic. We trimmed our launch time significantly and made the whole interface a lot more responsive.
Ideally, all of our changes would be made on the background managed object context. If I had to redo this solution, I would make main managed object context model instances read-only (semantically) and only perform changes on the background context. That way, I would eliminate having to save the main context before accessing objects in the background context.
The lesson learned here is to always measure your application before launching. It only took a few days to really shore up the interface and startup time. If we had invested those days before the launch, we could have had a much smoother experience out of the gates instead of in our iteration phase.
Case Study 3: User Profile View
The Krush user profile is a complex thing. It was important to get right both from a design perspective and from a code perspective. The design we envisioned has three tabs: Krushes, Influence, and Network.
More than that, though, the tabs need to be modular because, for a brand’s user page, we would want different tabs. It’s an interesting architectural problem; how does one structure the code in such a way that it can be reused in a modular fashion?
We could have used child view controllers, but I wanted to try something more data-driven. Instead, I used a single table view controlled by one UITableViewController. That controller holds a strong reference to a datasource object, which conforms to a protocol.
The datasource changes when a different tab is selected, and the table view is reloaded whenever that happens. When the table view queries the controller about what to display, the controller forwards those queries to its datasource.
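In code, that forwarding might look something like this (method and type names are assumptions, not Krush's actual API):

```objc
// Hypothetical sketch: swap the datasource when a tab is selected.
- (void)tabControlDidSelectTab:(KRXProfileTab)tab
{
    self.datasource = [self datasourceForTab:tab]; // conforms to the protocol
    [self.tableView reloadData];
}

// UITableViewDataSource queries forward to the current datasource.
- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section
{
    return [self.datasource numberOfRows];
}
```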
The datasources were used to populate the tab selection control, which we wrote ourselves. Depending on whether the user being displayed was a brand, different datasources were available. By using ReactiveCocoa, we were able to derive the datasource state of the view controller in viewDidLoad. Our table view controller itself is very light on logic, delegating layout concerns to the datasources instead.
Each datasource is responsible for supplying information like the number of rows or the height of any given row, and for laying out individual cells. Each datasource also exposes a cell class and a reuse identifier, which are used to register custom UITableViewCell subclasses with the table view in viewDidLoad. Finally, each datasource is responsible for exposing a ReactiveCocoa signal that triggers a table view reload.
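Pulling those responsibilities together, the protocol might read something like this sketch (the protocol and method names are hypothetical):

```objc
// A sketch of the datasource protocol implied above; names are assumed.
@protocol KRXProfileDatasource <NSObject>

// Layout information the table view controller forwards to.
- (NSInteger)numberOfRows;
- (CGFloat)heightForRowAtIndexPath:(NSIndexPath *)indexPath;
- (void)configureCell:(UITableViewCell *)cell
          atIndexPath:(NSIndexPath *)indexPath;

// Used to register the custom cell subclass in viewDidLoad.
- (Class)cellClass;
- (NSString *)reuseIdentifier;

// A ReactiveCocoa signal that sends when the table view should reload.
- (RACSignal *)reloadSignal;

@end
```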
This datasource approach worked well when designs changed in our iteration phase of the project. It also kept code clean and decoupled. One weakness of this approach is that, when aspects of the Network tab design were integrated into the Krushes tab design, there wasn’t an easy way to share that logic between the two different datasources. I wish that Objective-C had language-level support for an abstract class because that could have helped reduce code duplication between datasource objects.
Case Study 4: MVVM on the Feed
Early, pre-release versions of the application had a simple feed and a simple user onboarding tour. When we demoed it to colleagues around the office, the tour was identified as a weakness in the initial user experience. Geoff suggested integrating info cards into the feed on the first launch to show the user how to use the app. That way, they don’t have to memorize instructions from the tutorial before they can even use the app.
At that moment, our feed view controller was using an NSFetchedResultsController to display contents of our Core Data store. Instead of integrating logic for the new onboarding cards into our feed view controller, I explored an emerging pattern in Objective-C: Model-View-ViewModel.
In a nutshell, we abstracted all logic for presenting content in our view controller into a view model, which was agnostic to the actual UI. The view model would only provide information like whether or not the Endorse and Save buttons should be visible, or the image to use for a specific table view cell. We also moved the fetched results controller delegate code from the view controller into the view model, which would insert onboarding models into an internal array that it maintained.
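A rough sketch of what such a view model's interface could look like follows; the class and method names here are mine, not Krush's:

```objc
// Hypothetical feed view model: it owns the fetched results controller
// delegate logic and exposes only presentation decisions, no layout.
@interface KRXFeedViewModel : NSObject <NSFetchedResultsControllerDelegate>

- (NSUInteger)numberOfItems;

// Presentation logic the view controller binds cells to.
- (BOOL)shouldShowEndorseButtonAtIndexPath:(NSIndexPath *)indexPath;
- (BOOL)shouldShowSaveButtonAtIndexPath:(NSIndexPath *)indexPath;
- (UIImage *)imageAtIndexPath:(NSIndexPath *)indexPath;

// Called by the controller as the user nears the end of the feed, or
// pulls to refresh, so more results can be fetched.
- (void)loadMoreResults;

@end
```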
The view model would also be notified when the user was about to reach the end of the feed so that more results could be fetched, or when the user pulled-to-refresh.
This approach worked well when we integrated hashtags into the application. The same view controller was used, just with a different view model, with different presentation logic. By making our different view models conform to a common protocol that the view controller can rely on, we were able to keep our controller agnostic of what it was presenting, and how it was presenting it.
I’m very happy with how this approach worked out for us. If I had to do it over, I’d try harder to reduce code duplication between the different view models. Again, an abstract class could help here.
This was an exciting project for us here at Teehan+Lax. We learned a lot throughout the duration of the project and had a lot of fun doing it. We hope that by sharing some of the lessons we learned during the project, developers can make their own awesome apps. Go do great stuff!