Typeclass encoding in Swift

In Swift, if we want to sum the elements of an array, we might write a function like this:

func sumVecInt(xs: Array<Int>) -> Int {
  return xs.reduce(0) { $0 + $1 }
}

This works fine for Int, but what about when we want to sum an Array of Int8, Int32, or some other number type? The function's signature is fixed to Int, so passing an array of any other type gives us a type error. We would be forced to re-define this function for every new type. That sucks!
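To make the duplication concrete, here is a sketch of what we would be stuck writing (spelled with current Swift argument conventions): the two bodies are identical, and only the type changes.

```swift
// Without abstraction we must duplicate the function per number type.
func sumVecInt(_ xs: [Int]) -> Int {
  return xs.reduce(0) { $0 + $1 }
}

// Same body, different type. And again for Int32, Int16, ...
func sumVecInt8(_ xs: [Int8]) -> Int8 {
  return xs.reduce(0) { $0 + $1 }
}

print(sumVecInt([1, 2, 3]))  // 6
print(sumVecInt8([1, 2, 3])) // 6
```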

This is when we need a Type Class.

To encode a type class, first we define a protocol. In our example, we are making a type class for all number types, so we define the following protocol:

protocol Num {
  typealias N
  func zero() -> N
  func succ(n: N) -> N
  func add(x: N, y: N) -> N
  func multiply(x: N, y: N) -> N
}

Next we define an instance of this protocol for every number type we want to belong to the type class:

class NInt8: Num {
  typealias N = Int8
  func zero() -> N { return 0 }
  func succ(n: N) -> N { return n + 1 }
  func add(x: N, y: N) -> N { return x + y }
  func multiply(x: N, y: N) -> N { return x * y }
}

class NInt32: Num {
  typealias N = Int32
  func zero() -> N { return 0 }
  func succ(n: N) -> N { return n + 1 }
  func add(x: N, y: N) -> N { return x + y }
  func multiply(x: N, y: N) -> N { return x * y }
}

Here I defined both an Int8 and an Int32 instance, but instances don’t have to be limited to built-in types; you can write them for your own types too!
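For example, here is a sketch of an instance for a hypothetical user-defined type, Mod7 (arithmetic modulo 7) — the name and type are mine, not part of the post. The protocol is repeated so the snippet stands alone; associatedtype is the current Swift spelling of the protocol typealias used above.

```swift
protocol Num {
  associatedtype N
  func zero() -> N
  func succ(n: N) -> N
  func add(x: N, y: N) -> N
  func multiply(x: N, y: N) -> N
}

// A hypothetical user-defined number type: integers modulo 7.
struct Mod7 { var value: Int }

// Its Num instance, wrapping each operation with a mod.
class NMod7: Num {
  typealias N = Mod7
  func zero() -> N { return Mod7(value: 0) }
  func succ(n: N) -> N { return Mod7(value: (n.value + 1) % 7) }
  func add(x: N, y: N) -> N { return Mod7(value: (x.value + y.value) % 7) }
  func multiply(x: N, y: N) -> N { return Mod7(value: (x.value * y.value) % 7) }
}

let m = NMod7()
print(m.add(x: Mod7(value: 5), y: Mod7(value: 4)).value) // 2
```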

Now we can use the Num protocol to sum the elements of any Array<A>, so long as A is in Num. We define a function such as:

func sumVec<A, N: Num where N.N == A>(i: N, xs: Array<A>) -> A {
  return xs.reduce(i.zero()) { i.add($0, y: $1) }
}

That reads: sumVec works for any A, given an N that conforms to Num whose N.N type is equal to A. The first argument to the function is an instance of N; this instance supplies the zero() and add(x:y:) functions appropriate to the element type.
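Any other function can be written against the dictionary the same way. As one sketch (my example, not from the post, in current Swift generics syntax), a product function can derive its unit as succ(zero()) and fold with multiply:

```swift
protocol Num {
  associatedtype N
  func zero() -> N
  func succ(n: N) -> N
  func add(x: N, y: N) -> N
  func multiply(x: N, y: N) -> N
}

class NInt32: Num {
  typealias N = Int32
  func zero() -> N { return 0 }
  func succ(n: N) -> N { return n + 1 }
  func add(x: N, y: N) -> N { return x + y }
  func multiply(x: N, y: N) -> N { return x * y }
}

// Multiply all elements together; the instance i supplies every
// operation, including the unit one = succ(zero()).
func productVec<A, I: Num>(_ i: I, _ xs: [A]) -> A where I.N == A {
  let one = i.succ(n: i.zero())
  return xs.reduce(one) { i.multiply(x: $0, y: $1) }
}

print(productVec(NInt32(), [2, 3, 4])) // 24
```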

Using it in action:

let xs8: Array<Int8> = [0, 1, 4, 8]
let xs32: Array<Int32> = [0, 1, 4, 8]

let q = sumVec(NInt8(), xs8)   // 13
let r = sumVec(NInt32(), xs32) // 13

Note that we explicitly summon the instance (NInt8() or NInt32()) matching the array’s element type on each call.

Now, when you make your own number types, you too can implement this protocol and all these functions will work with the new type! Wow!

Scala programmers might recognize this as the type class encoding, but passed explicitly, because Swift does not have implicits. Haskell programmers will recognize it as explicit type class dictionary passing.

It is worth noting that without some form of implicit value search, Swift cannot encode real type classes. Instead, we are building ML-style modules, where the Num protocol is a module signature. Read more on Standard ML’s Module System here.


@jhaberstro has another encoding which uses extensions. This is much more Haskell-like because a type can only implement an instance once; when multiple definitions are desired, newtypes should be used. Code here: jhaberstro/1a1787c701d0e2285919. There is also a blog post with a modification of that approach: Type classes in Swift.
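The gist of that extension-based encoding (my own sketch of the idea, not jhaberstro's exact code) is to constrain Self in the protocol and conform the number types directly, so the compiler finds the single instance for you and no dictionary argument is needed:

```swift
// The protocol talks about Self instead of an associated type.
protocol NumSelf {
  static func zero() -> Self
  func add(_ other: Self) -> Self
}

// Each type conforms via an extension -- at most one instance per type.
extension Int8: NumSelf {
  static func zero() -> Int8 { return 0 }
  func add(_ other: Int8) -> Int8 { return self + other }
}

extension Int32: NumSelf {
  static func zero() -> Int32 { return 0 }
  func add(_ other: Int32) -> Int32 { return self + other }
}

// No explicit instance argument: the conformance is found implicitly.
func sumVec2<A: NumSelf>(_ xs: [A]) -> A {
  return xs.reduce(A.zero()) { $0.add($1) }
}

print(sumVec2([0, 1, 4, 8] as [Int8])) // 13
```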


  • Correction: Num is an ML signature, not a Functor.