Imagine for a moment that you're developing applications for both iOS and macOS (and perhaps an Apple Watch app as well) that share functionality: perhaps they're the desktop and mobile versions of the same product, or perhaps they're two distinct apps which nonetheless rely on some algorithmic secret sauce you developed. Both projects are fairly large and complex and under active development, and a substantial amount of code, itself still evolving, is shared between the two apps.
The simple approach would be to keep a copy of the shared code in each app's Xcode project. The problem with this, of course, is that the common code then lives in two distinct codebases, both under active development. Unless special effort is taken to keep them synchronised, they will diverge, and you will end up with a particular kind of technical debt: two increasingly different versions of code that do (mostly) the same thing, requiring ever more effort to unify back into one body of code.
A better approach is to take the common code (making sure, first of all, that it is nicely modular and loosely coupled to whatever uses it), move it to a separate codebase where it can be maintained in one place, and then modify the apps that depend on it to import it. There are several ways to do this: one could use a feature of the version control system, such as git submodules, or an Xcode-specific dependency management system such as CocoaPods or Carthage. Here I will be using Carthage: it is more lightweight than CocoaPods (it does not require replacing your Xcode project with an encompassing workspace under its management), and also higher-level than git submodules. In short, this approach involves creating a Framework project containing your shared code (and unit tests for it), placing that in a git repository, and configuring the projects for the apps that use it to import the framework using Carthage.
A cross-platform Framework
We need the Framework to be cross-platform. For the sake of simplicity, we assume that its code will not contain any platform-specific code such as UI code (though if it does, it can be handled with conditional compilation). As such, we create a macOS “Cocoa Framework” project, which has two targets: a Framework target and a Tests target for unit tests. (Making it a macOS project means, for one, that the tests can run without launching the iOS Simulator.)
We add our code to the Framework target (structured into a hierarchy of groups as we see fit), and then add some tests to the testing target and run the tests to make sure that the code compiles and works. Once that's done, all we need to do is to make it cross-platform, and make it work properly with Carthage.
Making it cross-platform is relatively straightforward. Firstly, we need to add the other platforms to the build settings: we go to the Build Settings tab of the target and change Supported Platforms. Xcode's interface for this is a single-choice drop-down menu, so we need to select “Other” and then add all the platforms we wish to support, including both simulator and device variants. The names of these platforms are somewhat idiosyncratic (often having originated before the modern marketing name for the platform was settled), and are:
- iphoneos and iphonesimulator for iOS.
- watchos and watchsimulator for watchOS.
- appletvos and appletvsimulator for tvOS.
The settings should look something like:
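In textual (xcconfig) form, the equivalent setting would be roughly the following, assuming we want to support every platform listed above:

```
SUPPORTED_PLATFORMS = macosx iphoneos iphonesimulator appletvos appletvsimulator watchos watchsimulator
```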
The framework project will also contain an umbrella header file, typically named name-of-framework.h, declaring the framework's version number and version string. To make this cross-platform, we need to edit it slightly, replacing the line which reads:

```objc
#import <Cocoa/Cocoa.h>
```

with:

```objc
#include "TargetConditionals.h"
#if TARGET_OS_IPHONE
#import <UIKit/UIKit.h>
#else
#if TARGET_OS_OSX
#import <Cocoa/Cocoa.h>
#endif
#endif
```

(Adding other cases for tvOS and watchOS if needed.)
After this, we can try selecting an iOS device type from the destination menu in the Xcode toolbar and building the framework for iOS; it should succeed, as it does for macOS.
Preparing it for Carthage
Next, we need to put our framework on Git in a form that is useful to Carthage. This is fairly straightforward, though there are a few things to keep in mind. The way Carthage works is as follows: when you run carthage update, it reads a list of Git repositories from the Cartfile, and for each one, checks it out to the Carthage/Checkouts directory, runs a build with xcodebuild, and puts the resulting Frameworks under Carthage/Build, organised by platform. From there onward, it is your responsibility to incorporate the built product into your Xcode project, as in the Carthage instructions.
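For example, a minimal Cartfile in an app project might contain a single line referencing the framework's repository (the URL here is a placeholder, not a real repository):

```
git "https://git.example.com/mycompany/SharedCode.git" ~> 1.0
```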
The good news is that a git repository of an Xcode project that produces Frameworks is already almost in Carthage-ready form. There are only a few things that need to be attended to, which are:
- As the Carthage documentation says, the scheme in the Framework project that builds the Framework needs to be shared. When the project is created, there is one scheme which is not shared. To share it, go to the Product | Scheme | Manage Schemes menu and check the “Shared” checkbox next to the scheme; like so:
- Schemes are stored in separate files within the .xcodeproj directory which are, by default, not checked into git. You will need to add these with git add myproject.xcodeproj/xcshareddata/xcschemes/* and check them in.
- Carthage uses git tags for versioning, which means that the repository containing the framework must have at least one tag that is a semantic version number, in a format such as 1.2 or v1.2. Make sure there is one and that it is pushed to the remote repository.
- Keep in mind that Swift Frameworks have their own namespaces, with everything being private by default. What this means is that any functions, classes or similar in your code will not be visible to the app that imports your framework unless they've been marked public.
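To illustrate the last point with a made-up example type, only the declarations marked public below would be visible to an app importing the framework:

```swift
// In the framework target; `Frobnicator` is a hypothetical example type.
public struct Frobnicator {
    private var count = 0       // invisible outside this type
    var label: String = ""      // internal: visible within the framework only

    public init() {}            // a public type also needs a public initialiser

    // Callable from the importing app, because it is marked public.
    public mutating func frobnicate() -> Int {
        count += 1
        return count
    }
}
```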
Building the apps
Now that we have a git repository containing our shared Framework, using it in an iOS or macOS app is straightforward: one just has to follow the Carthage instructions to add it to one's project.
Recently, I have been experimenting with writing macOS code which loads and uses AudioUnit instruments: more specifically, code which does this not with Apple's DLSMusicDevice or AUMIDISynth but with commercially available software instruments from vendors such as Native Instruments, KORG and Applied Acoustic Systems. The end goal is to write an app which can open any instrument installed on the system, allow the user to interact with its user interface and play MIDI notes on it, with the results going through an AudioUnit graph of the app's own choosing. Which sounds fairly straightforward, or so one would think.
Fortunately, Apple have provided their own code examples, in the form of AudioUnitV3Example, a suite containing sample units and hosting applications for both macOS and iOS. Unfortunately, though, while this works perfectly with Apple's own units which come with the OS, its operation with third-party instruments, particularly larger and more complex ones, is variable. While some seem to work perfectly (KORG's recreation of their M1 synthesizer, for example), others sometimes crash the hosting app at various stages (Native Instruments' plugins have tended to do so when changing presets), and at other times behave erratically.
This appears to be an artefact of the AudioUnit V3 API, or possibly of the internal code which bridges V2 AudioUnits to the V3 API. Presumably, while it works well enough for simple components, some more complex ones have issues with initialisation, memory management or the like, causing them to malfunction. The fact that most of the many apps which host software instruments predate the AudioUnit V3 API suggests that, at least for now, the key to stability is to work with the AudioUnit V2 API. Apps doing so will be in good company, alongside the likes of Logic and Ableton Live, and because of this, as V3-based software instruments come out, there will be a strong incentive to ensure that the adapters allowing them to run under the V2 API are solid. Of course, eventually the bugs will be fixed and the V2 API will be retired, so even if use of the AudioUnit V3 API is postponed, it should not be abandoned.
One problem with the V2 API, however, is that it is old. It dates back to the Carbon days of Mac OS, and doesn't even touch the Objective-C runtime, let alone Swift. Objects are referred to by opaque integer handles (which may or may not be pointers to undocumented structures); they are created, modified and manipulated with functions which take pointers to memory buffers and return numeric status codes. While the AudioUnit system itself is elegant (being a network of composable unit generators with standard interfaces for parameters and a standard API for composition), the same can't be said, at least these days, of the V2 API for dealing with it. One can access most of it from Swift (with small amounts of C code for some parts, such as initialising structures which have not been bridged to Swift), but the code is not particularly Swifty.
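To give a flavour of this, here is a sketch of instantiating a music device with the V2 API from Swift, using Apple's built-in DLS synth rather than a third-party instrument; note the opaque handles, in/out pointers and OSStatus return codes throughout:

```swift
import AudioToolbox

// Describe the component we want: Apple's built-in DLS software synth.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: kAudioUnitSubType_DLSSynth,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

// Look the component up; the result is an opaque handle (or nil).
if let component = AudioComponentFindNext(nil, &desc) {
    var instrument: AudioUnit?
    var status = AudioComponentInstanceNew(component, &instrument)
    if status == noErr, let instrument = instrument {
        status = AudioUnitInitialize(instrument)
        // Send a MIDI note-on: channel 0, middle C, velocity 100.
        status = MusicDeviceMIDIEvent(instrument, 0x90, 60, 100, 0)
        // ...and eventually tear the unit down again.
        AudioUnitUninitialize(instrument)
        AudioComponentInstanceDispose(instrument)
    }
}
```

(In a real host, the unit would of course also need to be wired into a running audio graph before the note would be audible.)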
I have written an example project which allows the user to load and play AudioUnit instruments, playing MIDI notes through the computer keyboard and operating the instrument's GUI in a window. This program is written in a modular fashion, and can be built to operate using either the V2 or V3 API; the default is V2, as that's the one that currently works correctly. The two API interfaces are in interchangeable classes implementing a protocol named InstrumentHost.[...]
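As a rough sketch of what such a protocol might look like, the shape is something along these lines; the method names here are my illustrative assumptions, not necessarily those in the actual project:

```swift
import AudioToolbox

// Hypothetical sketch of an InstrumentHost-style protocol; the real
// project's requirements may differ.
protocol InstrumentHost {
    /// Load the instrument matching a component description,
    /// reporting success or failure via the completion handler.
    func loadInstrument(fromDescription desc: AudioComponentDescription,
                        completion: @escaping (Bool) -> Void)

    /// Send note-on / note-off events to the loaded instrument.
    func noteOn(_ note: UInt8, velocity: UInt8)
    func noteOff(_ note: UInt8)
}
```

The V2- and V3-backed implementations can then be swapped behind this protocol without the rest of the app caring which API is in use.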
The MPDluxe app, an iOS-based remote controller for the MPD music-playing software, uses UICollectionViews to display the contents of directories; these are displayed in one vertically scrolling column on the narrow screen of the iPhone, or in a number of horizontally scrolling columns on the iPad. Early versions of the app, which ran only on the iPhone, used a UITableView, which provided a convenient index bar down the right-hand side of the view, allowing the user to quickly navigate long directory listings by swiping down a line of headings (in this case, letters of the alphabet). UICollectionView is much more flexible than UITableView, but the downside is that conveniences such as index bars are not provided; if you need one, you have to implement it yourself. Thankfully, there exists at least one implementation: Ben Kreeger's BDKCollectionIndexView, a UIControl subclass you can add on top of your collection view. After moving to UICollectionView, MPDluxe used this class.
The problem with one-level index bars is that they do not cope well with fine-grained navigation. For example, imagine a long list of thousands of items (such as names) in alphabetical order; scrolling to the first letter only gets you so far. It would be good to be able to zoom in and navigate by, say, the second letter as well. (One potential model for how this could be done is the transport control in Apple's QuickTime Player: drag left or right to move the playback position backward or forward, but drag down first and dragging left or right moves the position more finely, letting you home in more precisely on the position you're looking for.) So I started to write a new index control allowing this sort of navigation, which became KFIndexBar.
KFIndexBar looks much like the index bar in a UITableView; it displays a set of labels over a tinted background. Touching a label changes the control's value, allowing the code it is connected to to scroll its collection view appropriately. As with BDKCollectionIndexView, it also supports a horizontal orientation, allowing it to display its labels along the bottom of a collection view rather than down the right-hand side. The main user-facing difference, however, is that if the user touches the index bar and drags to the left, a gap opens below the currently touched top-level index and fills with intermediate indices between it and the index below it; the user can then drag over those, scrolling to any one of them. (For example, in an alphabetical index bar, touching the label for “A” and dragging left might open a set of secondary labels reading “AA”, “AD”, “AF”, and so on.)
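Since the bar is a UIControl, wiring it up follows the usual target-action pattern; a sketch of how a view controller might connect it to its collection view follows (the `currentIndex` property and the offset mapping are illustrative assumptions, not necessarily KFIndexBar's actual API):

```swift
import UIKit

class DirectoryViewController: UIViewController {
    @IBOutlet var collectionView: UICollectionView!
    @IBOutlet var indexBar: KFIndexBar!  // the UIControl subclass described above

    // The collection-view item each index label points at; how this is
    // populated is app-specific and omitted here.
    var labelOffsets: [Int] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        // As a UIControl, the bar reports selection via .valueChanged.
        indexBar.addTarget(self, action: #selector(indexBarChanged(_:)),
                           for: .valueChanged)
    }

    @objc func indexBarChanged(_ sender: KFIndexBar) {
        // `currentIndex` is an assumed property holding the selected label's index.
        let item = labelOffsets[sender.currentIndex]
        collectionView.scrollToItem(at: IndexPath(item: item, section: 0),
                                    at: .top, animated: false)
    }
}
```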
Below the cut, I will discuss implementation details:[...]