956

I was implementing an algorithm in Swift Beta and noticed that the performance was very poor. After digging deeper I realized that one of the bottlenecks was something as simple as sorting arrays. The relevant part is here:

let n = 1000000
var x =  [Int](repeating: 0, count: n)
for i in 0..<n {
    x[i] = random()
}
// start clock here
let y = sort(x)
// stop clock here

In C++, a similar operation takes 0.06s on my computer.

In Python, it takes 0.6s (no tricks, just y = sorted(x) for a list of integers).

In Swift it takes 6s if I compile it with the following command:

xcrun swift -O3 -sdk `xcrun --show-sdk-path --sdk macosx`

And it takes as much as 88s if I compile it with the following command:

xcrun swift -O0 -sdk `xcrun --show-sdk-path --sdk macosx`

Timings in Xcode with "Release" vs. "Debug" builds are similar.

What is wrong here? I could understand some performance loss in comparison with C++, but not a 10-fold slowdown in comparison with pure Python.


Edit: mweathers noticed that changing -O3 to -Ofast makes this code run almost as fast as the C++ version! However, -Ofast changes the semantics of the language a lot; in my testing, it disabled the checks for integer overflow and out-of-bounds array indexing. For example, with -Ofast the following Swift code runs silently without crashing (and prints out some garbage):

let n = 10000000
print(n*n*n*n*n)
let x =  [Int](repeating: 10, count: n)
print(x[n])

So -Ofast is not what we want; the whole point of Swift is that we have the safety nets in place. Of course, the safety nets have some impact on the performance, but they should not make the programs 100 times slower. Remember that Java already checks for array bounds, and in typical cases, the slowdown is by a factor much less than 2. And in Clang and GCC we have got -ftrapv for checking (signed) integer overflows, and it is not that slow, either.
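
(As an aside, a sketch of my own rather than part of the benchmark: Swift already lets you opt out of the overflow check for an individual operation with the wrapping operators, which is a far more targeted tool than -Ofast.)

// Sketch only, not part of the benchmark: `+` is overflow-checked, `&+` wraps silently.
let big = Int.max
let wrapped = big &+ 1    // wraps around to Int.min, no runtime check
// let trapped = big + 1  // traps at runtime under -O0/-O3; only -Ofast removes the check
print(wrapped)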

Hence the question: how can we get reasonable performance in Swift without losing the safety nets?


Edit 2: I did some more benchmarking, with very simple loops along the lines of

for i in 0..<n {
    x[i] = x[i] ^ 12345678
}

(Here the xor operation is there just so that I can more easily find the relevant loop in the assembly code. I tried to pick an operation that is easy to spot but also "harmless" in the sense that it should not require any checks related to integer overflows.)

Again, there was a huge difference in the performance between -O3 and -Ofast. So I had a look at the assembly code:

  • With -Ofast I get pretty much what I would expect. The relevant part is a loop with 5 machine language instructions.

  • With -O3 I get something that was beyond my wildest imagination. The inner loop spans 88 lines of assembly code. I did not try to understand all of it, but the most suspicious parts are 13 invocations of "callq _swift_retain" and another 13 invocations of "callq _swift_release". That is, 26 subroutine calls in the inner loop!


Edit 3: In comments, Ferruccio asked for benchmarks that are fair in the sense that they do not rely on built-in functions (e.g. sort). I think the following program is a fairly good example:

let n = 10000
var x = [Int](repeating: 1, count: n)
for i in 0..<n {
    for j in 0..<n {
        x[i] = x[j]
    }
}

There is no arithmetic, so we do not need to worry about integer overflows. The only thing that we do is just lots of array references. And here are the results: Swift -O3 loses by a factor of almost 500 in comparison with -Ofast:

  • C++ -O3: 0.05 s
  • C++ -O0: 0.4 s
  • Java: 0.2 s
  • Python with PyPy: 0.5 s
  • Python: 12 s
  • Swift -Ofast: 0.05 s
  • Swift -O3: 23 s
  • Swift -O0: 443 s

(If you are concerned that the compiler might optimize out the pointless loops entirely, you can change it to e.g. x[i] ^= x[j], and add a print statement that outputs x[0]. This does not change anything; the timings will be very similar.)

And yes, here the Python implementation was a stupid pure Python implementation with a list of ints and nested for loops; it should be much slower than unoptimized Swift, yet it is more than 30 times faster. Something seems to be seriously broken with Swift and array indexing.


Edit 4: These issues (as well as some other performance issues) seem to have been fixed in Xcode 6 beta 5.

For sorting, I now have the following timings:

  • clang++ -O3: 0.06 s
  • swiftc -Ofast: 0.1 s
  • swiftc -O: 0.1 s
  • swiftc: 4 s

For nested loops:

  • clang++ -O3: 0.06 s
  • swiftc -Ofast: 0.3 s
  • swiftc -O: 0.4 s
  • swiftc: 540 s

It seems that there is no reason anymore to use the unsafe -Ofast (a.k.a. -Ounchecked); plain -O produces equally good code.

Jukka Suomela
  • 21
    Here is another "Swift 100 times slower than C" question: http://stackoverflow.com/questions/24102609/why-swift-is-100-times-slower-than-c-in-this-image-processing-test – Jukka Suomela Jun 08 '14 at 09:02
  • 17
    And here is discussion on Apple's marketing material related to Swift's good performance in sorting: http://programmers.stackexchange.com/q/242816/913 – Jukka Suomela Jun 08 '14 at 09:43
  • 1
    It would be more interesting/informative to see a comparison to a sort function implemented in Python. Python's `sorted()` function is part of its runtime, which (I believe) is written in C. – Ferruccio Jun 08 '14 at 14:55
  • 1
    @Ferruccio: See edit 3. (It is not a sort function but I think it shows very well what kind of code performs poorly in Swift in comparison with everything else, including Python.) – Jukka Suomela Jun 08 '14 at 17:35
  • Can you compare it to Java too? – ilhan Jun 08 '14 at 18:50
  • @ilhan: Done. (By the way, a naive Java compiler should produce *slower* code than a naive Swift compiler. In Java to compute `x[i]` you need to first check that `x != null` and then that `x.length > i`. In Swift we can skip the first check. Nevertheless, as we see in the benchmarks, Java wins Swift -O3 by a factor approx. 100.) – Jukka Suomela Jun 08 '14 at 19:01
  • 1
    Have you seen the part from "The Swift Programming Language" iBook about for loops? It says that "`i` is a constant whose value is automatically set at the start of each iteration of the loop". Maybe declaring it as `var i: Int` before the loop will change things? – fabian789 Jun 08 '14 at 19:44
  • @Jukka: Depends on the platform. Null check not required if the platform has virtual memory and does not use low memory addresses as valid memory locations (e.g. Windows and I think other OSes too); the MMU handles the null check in that case. Not surprising at all that a brand new front end for a new language does worse than a 6 year old, mature, front end. I suspect Apple will fix this before Swift is out of beta. – Billy ONeal Jun 09 '14 at 01:54
  • 2
    You can compile with: `xcrun --sdk macosx swift -O3`. It's shorter. – Southern Hospitality Jun 09 '14 at 17:33
  • 3
    [This](http://www.splasmata.com/?p=2798) link shows some other basic operations in comparison to Objective-C. – Wold Jun 10 '14 at 02:38
  • *Remember that Java already checks for array bounds*: bounds checks are very likely to be removed when the compiler can prove they always hold. Java should run pretty much like C (once properly warmed up) in this simple case. Null checks are generally not performed directly but are trapped by the hardware, and the compiler can prove that x[i] is not null for sure; it would have to be beyond dumb to actually emit a check for x being `null`. – bestsss Jun 10 '14 at 06:21
  • What's wrong with using Swift's 'safety nets' in development and saving -Ofast for release? – Joseph Mark Jun 10 '14 at 08:00
  • 2
    @sjeohp, you do need the 'safety net' in production as well, because the input varies. It's one thing to process and multiply values between 1 and 10, and another to multiply values in the range of 2^31. For instance, the infamous Heartbleed bug was caused by a missing range check. – bestsss Jun 10 '14 at 09:13
  • sure, but if you're aware of the risks then surely you can sanitize your inputs where necessary to guarantee that overflow won't occur – Joseph Mark Jun 10 '14 at 09:28
  • not saying it's ideal but if performance is the priority then the risks at least seem manageable – Joseph Mark Jun 10 '14 at 09:34
  • 1
    @sjeohp, to put it simply, we don't live in a perfect world, and trying to do what you suggest in 1M-LoC projects is far harder than you'd imagine. Bugs do exist; stack overflow (the namesake of this site) was one of the most prevalent (and still is), and before the no-execute bit it used to allow arbitrary code execution very often. Java DOES run with full range checks all the time and it doesn't really affect performance; having the checks and failing gracefully is a great feat for a language. In recent years there was a huge security flaw caused by bypassing them via Unsafe in seemingly well-written code. – bestsss Jun 10 '14 at 09:34
  • 1
    Everyone knows that any loop on iOS or OS X with more than 10000 iterations should be done in C or C++. Where is the surprise? Is this a rhetorical question? – brainray Jun 10 '14 at 22:39
  • 2
    By the way, `-Ofast` also disables checks for unwrapping nils; you can compile and run this "successfully": `let s: Double? = nil; println(s!)` – Jukka Suomela Jun 11 '14 at 19:00
  • 4
    With Beta 5 there has been substantial improvement in Swift's speed -- see [this post by Jesse Squires](http://www.jessesquires.com/apples-to-apples-part-two/) for more detail. – Nate Cook Aug 08 '14 at 18:12
  • 1
    Will you also update this for Swift 2.0, as it claims further performance increases? In my own tests, I found that unless you compile with -Ounchecked it is 100000 slower even for simple loop tests. With -Ounchecked it is "only" 50 times slower. Still, it blows Python out of the water in both cases. – Arqu Jun 13 '15 at 07:59
  • The java number seems high so I tested it myself and got times of 50-60ms to run the code for "=" and 60-80ms if I use the "^=". Did you include the VM startup time in those figures, or perhaps you meant .02s? Java is usually as fast as C for this type of operation. Also java settles down to about .04(=) and .06(^=) when I run the loop repeatedly (allowing Java time to compile it into optimized machine language). The .04 may include test-breaking optimizations though. – Bill K Oct 19 '16 at 20:51

9 Answers

471

tl;dr Swift 1.0 is now as fast as C by this benchmark using the default release optimisation level [-O].


Here is an in-place quicksort in Swift Beta:

func quicksort_swift(inout a:CInt[], start:Int, end:Int) {
    if (end - start < 2){
        return
    }
    var p = a[start + (end - start)/2]
    var l = start
    var r = end - 1
    while (l <= r){
        if (a[l] < p){
            l += 1
            continue
        }
        if (a[r] > p){
            r -= 1
            continue
        }
        var t = a[l]
        a[l] = a[r]
        a[r] = t
        l += 1
        r -= 1
    }
    quicksort_swift(&a, start, r + 1)
    quicksort_swift(&a, r + 1, end)
}

And the same in C:

void quicksort_c(int *a, int n) {
    if (n < 2)
        return;
    int p = a[n / 2];
    int *l = a;
    int *r = a + n - 1;
    while (l <= r) {
        if (*l < p) {
            l++;
            continue;
        }
        if (*r > p) {
            r--;
            continue;
        }
        int t = *l;
        *l++ = *r;
        *r-- = t;
    }
    quicksort_c(a, r - a + 1);
    quicksort_c(l, a + n - l);
}

Both work:

var a_swift:CInt[] = [0,5,2,8,1234,-1,2]
var a_c:CInt[] = [0,5,2,8,1234,-1,2]

quicksort_swift(&a_swift, 0, a_swift.count)
quicksort_c(&a_c, CInt(a_c.count))

// [-1, 0, 2, 2, 5, 8, 1234]
// [-1, 0, 2, 2, 5, 8, 1234]

Both are called in the same program as written.

var x_swift = CInt[](count: n, repeatedValue: 0)
var x_c = CInt[](count: n, repeatedValue: 0)
for var i = 0; i < n; ++i {
    x_swift[i] = CInt(random())
    x_c[i] = CInt(random())
}

let swift_start:UInt64 = mach_absolute_time();
quicksort_swift(&x_swift, 0, x_swift.count)
let swift_stop:UInt64 = mach_absolute_time();

let c_start:UInt64 = mach_absolute_time();
quicksort_c(&x_c, CInt(x_c.count))
let c_stop:UInt64 = mach_absolute_time();

This converts the absolute times to seconds:

static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MSEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MSEC;

mach_timebase_info_data_t timebase_info;

uint64_t abs_to_nanos(uint64_t abs) {
    if ( timebase_info.denom == 0 ) {
        (void)mach_timebase_info(&timebase_info);
    }
    return abs * timebase_info.numer  / timebase_info.denom;
}

double abs_to_seconds(uint64_t abs) {
    return abs_to_nanos(abs) / (double)NANOS_PER_SEC;
}
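
For reference, a Swift-only version of the same conversion might look like this (my own sketch in current Swift syntax; the benchmark above calls the C helpers instead):

// My helper, current Swift syntax, not part of the original benchmark.
import Darwin

func absToSeconds(_ ticks: UInt64) -> Double {
    var info = mach_timebase_info_data_t()
    _ = mach_timebase_info(&info)                        // numer/denom convert ticks to nanoseconds
    let nanos = ticks * UInt64(info.numer) / UInt64(info.denom)
    return Double(nanos) / 1_000_000_000
}

// e.g. absToSeconds(swift_stop - swift_start) gives the Swift quicksort time in seconds.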

Here is a summary of the compiler's optimization levels:

[-Onone] no optimizations, the default for debug.
[-O]     perform optimizations, the default for release.
[-Ofast] perform optimizations and disable runtime overflow checks and runtime type checks.

Time in seconds with [-Onone] for n=10_000:

Swift:            0.895296452
C:                0.001223848

Here is Swift's builtin sort() for n=10_000:

Swift_builtin:    0.77865783

Here is [-O] for n=10_000:

Swift:            0.045478346
C:                0.000784666
Swift_builtin:    0.032513488

As you can see, Swift's performance improved by a factor of 20.

As per mweathers' answer, setting [-Ofast] makes the real difference, resulting in these times for n=10_000:

Swift:            0.000706745
C:                0.000742374
Swift_builtin:    0.000603576

And for n=1_000_000:

Swift:            0.107111846
C:                0.114957179
Swift_sort:       0.092688548

For comparison, this is with [-Onone] for n=1_000_000:

Swift:            142.659763258
C:                0.162065333
Swift_sort:       114.095478272

So Swift with no optimizations was almost 1000x slower than C in this benchmark, at this stage in its development. On the other hand with both compilers set to [-Ofast] Swift actually performed at least as well if not slightly better than C.

It has been pointed out that [-Ofast] changes the semantics of the language, making it potentially unsafe. This is what Apple states in the Xcode 5.0 release notes:

A new optimization level -Ofast, available in LLVM, enables aggressive optimizations. -Ofast relaxes some conservative restrictions, mostly for floating-point operations, that are safe for most code. It can yield significant high-performance wins from the compiler.

They all but advocate it. Whether that's wise or not I couldn't say, but from what I can tell it seems reasonable enough to use [-Ofast] in a release if you're not doing high-precision floating point arithmetic and you're confident no integer or array overflows are possible in your program. If you do need high performance and overflow checks / precise arithmetic then choose another language for now.
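
If you would rather keep explicit control over overflow instead of relying on the compiler's checks, Swift also exposes it directly. A sketch using the current addingReportingOverflow API (which postdates the beta discussed here):

// addingReportingOverflow is the current (Swift 4+) API name, shown here for illustration.
func safeAdd(_ a: Int, _ b: Int) -> Int? {
    let (sum, didOverflow) = a.addingReportingOverflow(b)
    return didOverflow ? nil : sum      // nil instead of a trap or a silent wraparound
}

// safeAdd(Int.max, 1) == nil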

BETA 3 UPDATE:

n=10_000 with [-O]:

Swift:            0.019697268
C:                0.000718064
Swift_sort:       0.002094721

Swift in general is a bit faster and it looks like Swift's built-in sort has changed quite significantly.

FINAL UPDATE:

[-Onone]:

Swift:   0.678056695
C:       0.000973914

[-O]:

Swift:   0.001158492
C:       0.001192406

[-Ounchecked]:

Swift:   0.000827764
C:       0.001078914
Joseph Mark
  • 25
    Using -emit-sil to output the intermediate SIL code shows what's being retained (argh, stack overflow is making this impossible to format). It's an internal buffer object in the Array. This definitely sounds like an optimizer bug, the ARC optimizer should be able to remove the retains without -Ofast. – Catfish_Man Jun 08 '14 at 09:17
  • I'll just disagree that we have to use another language if we want to use -Ofast optimizations. We would have to deal with the same questions of bounds checks and other minor problems if we picked another language like C. Swift is cool precisely because it is secure by default and optionally fast and unsafe when needed. This also allows the programmer to debug the code, make sure everything is OK, and then compile with -Ofast. The possibility of using modern language features and yet having the power of an "unsafe" language like C is very cool. – Wallacy Jun 10 '14 at 17:26
  • @bestsss It does not appear to be a problem; C has no such checks and that is not treated as a problem there either. In fact, forcing the developer to use a checked version of every statement is bad; the developer needs to stay in control. And yes, this is a BETA language, so some bugs exist. Java and C# are much, much slower than C, but Swift is not; there are some problems right now with -O3, but they will surely be fixed. I ran the same code in Objective-C and it works fine with -O3, so Swift may well reach -O3 performance similar to -Ofast, like Objective-C did, and then Swift will run very close to C. – Wallacy Jun 11 '14 at 02:49
  • @bestsss About ARC: I have a multimedia application written in Objective-C that uses on average 350 threads at the same time, all under ARC. It's a very good feature; in fact, with a previous version that used GC I never got more than 150 threads without some issues. The problem is not ARC, the problem is the current implementation of the Swift compiler; the Objective-C version of ARC runs fine at -Onone without putting retain/release inside a simple for loop like that, which doesn't make sense after all. So this bug may be fixed quickly, because for Objective-C the compiler does the correct work. – Wallacy Jun 11 '14 at 02:52
  • Don't you think a speedup of 800x is very suspicious? Your benchmark is probably invalid. Maybe all code under test was deleted as an optimization. – usr Jun 13 '14 at 11:53
  • 2
    If you can tell me how it might be invalid, please do. I always like to learn more. – Joseph Mark Jun 13 '14 at 12:37
  • 3
    made a final update, Swift is now as fast as C by this benchmark using standard optimisations. – Joseph Mark Oct 09 '14 at 03:19
  • 4
    Tip: Both your Swift and C implementations of quicksort can be improved if your recurse on the *smallest* partition first! (Instead of recursing on the left partition always first.) Quicksort implemented with a simple pivot selection in the worst case takes O(n^2) time, but even in this worst case you only need O(log n) stack space by recursing on the smaller partition first. – Macneil Shonle Mar 16 '15 at 17:01
  • @Macneil: Doesn't that assume tail call optimisation? I doubt Swift does TCO because it uses reference counting. – J D Feb 20 '16 at 00:16
  • @JosephMark: How did you compile the C (compiler and settings)? – J D Feb 20 '16 at 00:19
  • It seems the comparison results depend on WHAT you're doing with arrays. Take a look at my answer. – Duncan C Feb 26 '16 at 03:06
  • @JonHarrop can't remember but it was llvm and would have been default xcode settings apart from optimization level, as both were the same project. – Joseph Mark Feb 26 '16 at 11:32
  • what is the actual command you're running? xcrun -sdk -O macosx swiftc File.swift doesn't work for me – Rodrigo Ruiz Jul 10 '16 at 16:51
  • Check those timings with subsequent operations done on the sorted array. Even writing the array to a file after the timing check might not work, because the highest optimization level may actually move part of the sorting process past the checkpoint. If the array is not used, the whole sort may be skipped altogether. Not sure about C, but C++ and Swift/C# compilers tend to do that; that's why aliasing violations in C++ usually show up only in optimized programs. – Swift - Friday Pie May 12 '17 at 09:48
  • A little bit of confusion here about the definition of in-place sorting: if it means constant extra memory, then the solution provided in this post is not in place, since calling `quicksort_swift` recursively requires O(log N) extra space. – Nhat Dinh Feb 19 '20 at 08:37
112

TL;DR: Yes, the only Swift language implementation is slow right now. If you need fast numeric code (and presumably other kinds of code too), just go with another language for the time being, and re-evaluate your choice later. Swift might already be good enough for most application code that is written at a higher level, though.

From what I'm seeing in SIL and LLVM IR, it seems like they need a bunch of optimizations for removing retains and releases, which might be implemented in Clang (for Objective-C), but they haven't ported them yet. That's the theory I'm going with (for now… I still need to confirm that Clang does something about it), since a profiler run on the last test-case of this question yields this “pretty” result:

[Screenshots: time profiling with -O3 and time profiling with -Ofast]

As was said by many others, -Ofast is totally unsafe and changes language semantics. For me, it's at the “If you're going to use that, just use another language” stage. I'll re-evaluate that choice later, if it changes.

-O3 gets us a bunch of swift_retain and swift_release calls that, honestly, don't look like they should be there for this example. The optimizer should have elided (most of) them AFAICT, since it knows most of the information about the array, and knows that it has (at least) a strong reference to it.

It shouldn't emit more retains when it's not even calling functions which might release the objects. I don't think an array constructor can return an array which is smaller than what was asked for, which means that a lot of checks that were emitted are useless. It also knows that the integer will never be above 10k, so the overflow checks can be optimized out, not because of -Ofast weirdness, but because of the semantics of the language (nothing else is changing that var nor can access it, and adding up to 10k is safe for the type Int).

The compiler might not be able to unbox the array or the array elements, though, since they're getting passed to sort(), which is an external function and has to get the arguments it's expecting. This will make us have to use the Int values indirectly, which would make it go a bit slower. This could change if the sort() generic function (not in the multi-method way) was available to the compiler and got inlined.

This is a very new (publicly) language, and it is going through what I assume are lots of changes, since there are people (heavily) involved with the Swift language asking for feedback and they all say the language isn't finished and will change.

Code used:

import Cocoa

let swift_start = NSDate.timeIntervalSinceReferenceDate();
let n: Int = 10000
let x = Int[](count: n, repeatedValue: 1)
for i in 0..n {
    for j in 0..n {
        let tmp: Int = x[j]
        x[i] = tmp
    }
}
let y: Int[] = sort(x)
let swift_stop = NSDate.timeIntervalSinceReferenceDate();

println("\(swift_stop - swift_start)s")

P.S.: I'm not an expert on Objective-C nor on all the facilities of Cocoa, Objective-C, or the Swift runtimes. I might also be assuming some things that I didn't write down.

filcab
  • *The compiler might not be able to unbox the array or the array elements, though, since they're getting passed to sort(), which is an external function and has to get the arguments it's expecting.* That should not matter to a relatively good compiler: pass metadata about the actual data (in the pointer; 64 bits offer a lot of leeway) and branch on it in the called function. – bestsss Jun 10 '14 at 06:31
  • 3
    What exactly makes `-Ofast` "totally unsafe"? Assuming you know how to test your code and rule out overflows. – Joseph Mark Jun 10 '14 at 07:54
  • @sjeohp: That's actually assuming a lot :-) Checking the code and ruling out overflows is hard to do. From my experience (I do compiler work and have checked some big codebases), and from what I've heard from people who do compiler work at huge companies, getting overflows and other undefined behavior right is **hard**. Even Apple's advice (just an example) on fixing UB is sometimes wrong (http://randomascii.wordpress.com/2014/04/17/buggy-security-guidance-from-apple/ ). `-Ofast` also changes language semantics, but I can't find any docs for it. How can you be confident you know what it's doing? – filcab Jun 10 '14 at 15:42
  • @bestsss: It's possible, but it might not be useful. It adds checks on every access to an Int[]. It depends if arrays of Int and a few other primitive types (you have, at most, 3 bits) are used a lot (especially when you can lower to C if you need to). It also uses up some bits that they might want to use if, eventually, they want to add non-ARC GC. It doesn't scale to generics with more than one argument, either. Since they have all the types, it would be much easier to specialize all code that touched Int[] (but not Int?[]) to use inlined Int. But then you have Obj-C interop to worry about. – filcab Jun 10 '14 at 15:57
  • @filcab, non-ARC (i.e. real) GC would be actually useful but they need something that's not C compatible if they want a truly concurrent, non-STW GC. I'd not worry about 'every access to `Int[]`' since that depends on the level the compiler can inline and it should be able to inline the tight loops with/after some guidance. – bestsss Jun 10 '14 at 19:24
56

I decided to take a look at this for fun, and here are the timings that I get:

Swift 4.0.2           :   0.83s (0.74s with `-Ounchecked`)
C++ (Apple LLVM 8.0.0):   0.74s

Swift

// Swift 4.0 code
import Foundation

func doTest() -> Void {
    let arraySize = 10000000
    var randomNumbers = [UInt32]()

    for _ in 0..<arraySize {
        randomNumbers.append(arc4random_uniform(UInt32(arraySize)))
    }

    let start = Date()
    randomNumbers.sort()
    let end = Date()

    print(randomNumbers[0])
    print("Elapsed time: \(end.timeIntervalSince(start))")
}

doTest()

Results:

Swift 1.1

xcrun swiftc --version
Swift version 1.1 (swift-600.0.54.20)
Target: x86_64-apple-darwin14.0.0

xcrun swiftc -O SwiftSort.swift
./SwiftSort     
Elapsed time: 1.02204304933548

Swift 1.2

xcrun swiftc --version
Apple Swift version 1.2 (swiftlang-602.0.49.6 clang-602.0.49)
Target: x86_64-apple-darwin14.3.0

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort     
Elapsed time: 0.738763988018036

Swift 2.0

xcrun swiftc --version
Apple Swift version 2.0 (swiftlang-700.0.59 clang-700.0.72)
Target: x86_64-apple-darwin15.0.0

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort     
Elapsed time: 0.767306983470917

It seems to be the same performance if I compile with -Ounchecked.

Swift 3.0

xcrun swiftc --version
Apple Swift version 3.0 (swiftlang-800.0.46.2 clang-800.0.38)
Target: x86_64-apple-macosx10.9

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort     
Elapsed time: 0.939633965492249

xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort     
Elapsed time: 0.866258025169373

There seems to have been a performance regression from Swift 2.0 to Swift 3.0, and I'm also seeing a difference between -O and -Ounchecked for the first time.

Swift 4.0

xcrun swiftc --version
Apple Swift version 4.0.2 (swiftlang-900.0.69.2 clang-900.0.38)
Target: x86_64-apple-macosx10.9

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort     
Elapsed time: 0.834299981594086

xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort     
Elapsed time: 0.742045998573303

Swift 4 improves the performance again, while maintaining a gap between -O and -Ounchecked. -O -whole-module-optimization did not appear to make a difference.

C++

#include <chrono>
#include <iostream>
#include <vector>
#include <cstdint>
#include <stdlib.h>

using namespace std;
using namespace std::chrono;

int main(int argc, const char * argv[]) {
    const auto arraySize = 10000000;
    vector<uint32_t> randomNumbers;

    for (int i = 0; i < arraySize; ++i) {
        randomNumbers.emplace_back(arc4random_uniform(arraySize));
    }

    const auto start = high_resolution_clock::now();
    sort(begin(randomNumbers), end(randomNumbers));
    const auto end = high_resolution_clock::now();

    cout << randomNumbers[0] << "\n";
    cout << "Elapsed time: " << duration_cast<duration<double>>(end - start).count() << "\n";

    return 0;
}

Results:

Apple Clang 6.0

clang++ --version
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort     
Elapsed time: 0.688969

Apple Clang 6.1.0

clang++ --version
Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.3.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort     
Elapsed time: 0.670652

Apple Clang 7.0.0

clang++ --version
Apple LLVM version 7.0.0 (clang-700.0.72)
Target: x86_64-apple-darwin15.0.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort     
Elapsed time: 0.690152

Apple Clang 8.0.0

clang++ --version
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort     
Elapsed time: 0.68253

Apple Clang 9.0.0

clang++ --version
Apple LLVM version 9.0.0 (clang-900.0.38)
Target: x86_64-apple-darwin16.7.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort     
Elapsed time: 0.736784

Verdict

As of the time of this writing, Swift's sort is fast, but not yet as fast as C++'s sort when compiled with -O, with the above compilers & libraries. With -Ounchecked, it appears to be as fast as C++ in Swift 4.0.2 and Apple LLVM 9.0.0.

Learn OpenGL ES
34

From The Swift Programming Language:

The Sort Function

Swift’s standard library provides a function called sort, which sorts an array of values of a known type, based on the output of a sorting closure that you provide. Once it completes the sorting process, the sort function returns a new array of the same type and size as the old one, with its elements in the correct sorted order.

The sort function has two declarations.

The default declaration which allows you to specify a comparison closure:

func sort<T>(array: T[], pred: (T, T) -> Bool) -> T[]

And a second declaration that only takes a single parameter (the array) and is "hardcoded to use the less-than comparator."

func sort<T : Comparable>(array: T[]) -> T[]

Example:
sort(arrayToSort) { $0 > $1 }

I tested a modified version of your code in a playground with the closure added on so I could monitor the function a little more closely, and I found that with n set to 1000, the closure was being called about 11,000 times.

let n = 1000
let x = Int[](count: n, repeatedValue: 0)
for i in 0..n {
    x[i] = random()
}
let y = sort(x) { $0 > $1 }
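
The call count can also be made explicit with a counter in the comparison closure (a sketch using the same beta-era API; the ~11,000 figure is the one observed above):

// Sketch: explicit counter, same beta-era sort API as the snippet above.
var comparisons = 0
let z = sort(x) { (a: Int, b: Int) -> Bool in
    comparisons += 1
    return a > b
}
comparisons // about 11,000 for n = 1000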

It is not an efficient function, and I would recommend using a better sorting function implementation.

EDIT:

I took a look at the Quicksort wikipedia page and wrote a Swift implementation for it. Here is the full program I used (in a playground)

import Foundation

func quickSort(inout array: Int[], begin: Int, end: Int) {
    if (begin < end) {
        let p = partition(&array, begin, end)
        quickSort(&array, begin, p - 1)
        quickSort(&array, p + 1, end)
    }
}

func partition(inout array: Int[], left: Int, right: Int) -> Int {
    let numElements = right - left + 1
    let pivotIndex = left + numElements / 2
    let pivotValue = array[pivotIndex]
    swap(&array[pivotIndex], &array[right])
    var storeIndex = left
    for i in left..right {
        let a = 1 // <- Used to see how many comparisons are made
        if array[i] <= pivotValue {
            swap(&array[i], &array[storeIndex])
            storeIndex++
        }
    }
    swap(&array[storeIndex], &array[right]) // Move pivot to its final place
    return storeIndex
}

let n = 1000
var x = Int[](count: n, repeatedValue: 0)
for i in 0..n {
    x[i] = Int(arc4random())
}

quickSort(&x, 0, x.count - 1) // <- Does the sorting

for i in 0..n {
    x[i] // <- Used by the playground to display the results
}

Using this with n=1000, I found that

  1. quickSort() got called about 650 times,
  2. about 6000 swaps were made,
  3. and there are about 10,000 comparisons

It seems that the built-in sort method is (or is close to) quick sort, and is really slow...

David Skrundz
  • 17
    Perhaps I am completely wrong, but according to http://en.wikipedia.org/wiki/Quicksort, the average number of comparisons in Quicksort is `2*n*log(n)`. That is 13815 comparisons for sorting n = 1000 elements, so if the comparison function is called about 11000 times that does not seem so bad. – Martin R Jun 08 '14 at 00:38
  • 6
    Also Apple claimed that a "complex object sort" (whatever that is) is 3.9 times faster in Swift than in Python. Therefore it should not be necessary to find a "better sorting function". - But Swift is still in development ... – Martin R Jun 08 '14 at 00:40
  • 2*n*log(n) for n=1000 works out to 6000. (unless log refers to the natural logarithm, but I'm used to ln() being used for that). I will update my answer with some new finding. – David Skrundz Jun 08 '14 at 01:25
  • 6
    It *does* refer to the natural logarithm. – Martin R Jun 08 '14 at 01:28
  • 24
    `log(n)` for algorithmic complexity conventionally refers to log base-2. The reason for not stating the base is that the change-of-base law for logarithms only introduces a constant multiplier, which is discarded for the purposes of O-notation. – minuteman3 Jun 08 '14 at 14:58
  • 2
    NOTE: That documentation does not match the implementation. sort sorts and then returns that very array as the return value. Check with === or alter a value in the returned array and check the result in the returned array. It does not return a new array as it says it will. – Joseph Lord Jun 09 '14 at 00:30
  • 3
    Regarding the discussion about natural logarithm vs base 2 logarithm: The precise statement from the Wikipedia page is that the average number of comparisons needed for n elements is `C(n) = 2n ln n ≈ 1.39n log₂ n`. For n = 1000 this gives C(n) = 13815, and it is _not_ a "big-O notation". – Martin R Jun 09 '14 at 12:00
18

As of Xcode 7 you can turn on Fast, Whole Module Optimization. This should increase your performance immediately.

[Screenshot: the Optimization Level build setting in Xcode set to "Fast, Whole Module Optimization"]

Antoine
12

Swift Array performance revisited:

I wrote my own benchmark comparing Swift with C/Objective-C. My benchmark calculates prime numbers. It uses the array of previous prime numbers to look for prime factors in each new candidate, so it is quite fast. However, it does TONS of array reading, and less writing to arrays.

I originally did this benchmark against Swift 1.2. I decided to update the project and run it against Swift 2.0.

The project lets you select between using normal Swift arrays and using Swift unsafe memory buffers with array semantics.
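
For illustration, here is a minimal sketch (mine, not the project's code) of what "array semantics over unsafe memory" can look like: reading the array through withUnsafeBufferPointer, which avoids Array's per-element bounds check in the hot loop.

// Sketch, not code from SwiftPerformanceBenchmark.
let somePrimes = [2, 3, 5, 7, 11, 13]
let candidate = 17

let isPrime = somePrimes.withUnsafeBufferPointer { (buffer) -> Bool in
    for i in 0..<buffer.count {
        if candidate % buffer[i] == 0 {
            return false    // found a prime factor
        }
    }
    return true
}
// isPrime == true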

For C/Objective-C, you can either opt to use NSArrays, or C malloc'ed arrays.

The test results seem to be pretty similar with fastest, smallest code optimization ([-Os]) or fastest, aggressive ([-Ofast]) optimization.

Swift 2.0 performance is still horrible with code optimization turned off, whereas C/Objective-C performance is only moderately slower.

The bottom line is that C malloc'd array-based calculations are the fastest, by a modest margin.

Swift with unsafe buffers takes around 1.19x to 1.20x longer than C malloc'd arrays when using fastest, smallest code optimization. The difference seems slightly smaller with fast, aggressive optimization (Swift takes more like 1.16x to 1.18x longer than C).

If you use regular Swift arrays, the difference with C is slightly greater (Swift takes ~1.22x to 1.23x longer).

Regular Swift arrays are DRAMATICALLY faster than they were in Swift 1.2/Xcode 6. Their performance is so close to Swift unsafe buffer based arrays that using unsafe memory buffers does not really seem worth the trouble any more, which is big.

BTW, Objective-C NSArray performance stinks. If you're going to use the native container objects in both languages, Swift is DRAMATICALLY faster.

You can check out my project on github at SwiftPerformanceBenchmark

It has a simple UI that makes collecting stats pretty easy.

It's interesting that sorting seems to be slightly faster in Swift than in C now, but that this prime number algorithm is still faster in Swift.

Duncan C
8

The main issue, mentioned by others but not called out enough, is that -O3 does nothing at all in Swift (and never has), so code compiled with it is effectively non-optimised (-Onone).

Option names have changed over time so some other answers have obsolete flags for the build options. Correct current options (Swift 2.2) are:

-Onone // Debug - slow
-O     // Optimised
-O -whole-module-optimization //Optimised across files

Whole module optimisation compiles more slowly but can optimise across files within the module, i.e. within each framework and within the actual application code, but not between them. (You should use this for anything performance critical.)

You can also disable safety checks for even more speed, but then all assertions and preconditions are not just disabled but optimised on the basis that they are correct. If you ever hit an assertion, this means that you are into undefined behaviour. Use with extreme caution and only if you have determined that the speed boost is worthwhile for you (by testing). If you do find it valuable for some code, I recommend separating that code into a separate framework and only disabling the safety checks for that module.
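
A small sketch of what that means in practice (my example, not code from a real project):

// Hypothetical helper, for illustration only.
func element(at index: Int, of values: [Int]) -> Int {
    // With safety checks disabled, this precondition is removed *and* assumed to hold,
    // so calling the function with a bad index is undefined behaviour rather than a trap.
    precondition(index >= 0 && index < values.count, "index out of range")
    return values[index]
}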

Joseph Lord
  • This answer is now out of date. As of Swift 4.1 the whole module optimisation option is a separate boolean that can be combined with other settings and there is now an -Os to optimise for size. I may update when I have time to check the exact option flags. – Joseph Lord Apr 16 '18 at 14:29
7
func partition(inout list : [Int], low: Int, high : Int) -> Int {
    let pivot = list[high]
    var j = low
    var i = j - 1
    while j < high {
        if list[j] <= pivot{
            i += 1
            (list[i], list[j]) = (list[j], list[i])
        }
        j += 1
    }
    (list[i+1], list[high]) = (list[high], list[i+1])
    return i+1
}

func quickSort(inout list: [Int], low: Int, high: Int) {
    if low < high {
        let pIndex = partition(&list, low: low, high: high)
        quickSort(&list, low: low, high: pIndex - 1)
        quickSort(&list, low: pIndex + 1, high: high)
    }
}

var list = [7,3,15,10,0,8,2,4]
quickSort(&list, low: 0, high: list.count-1)

var list2 = [ 10, 0, 3, 9, 2, 14, 26, 27, 1, 5, 8, -1, 8 ]
quickSort(&list2, low: 0, high: list2.count-1)

var list3 = [1,3,9,8,2,7,5]
quickSort(&list3, low: 0, high: list3.count-1)

This is my blog about Quick Sort: Github sample Quick-Sort

You can take a look at Lomuto's partitioning algorithm in "Partitioning the list". It is written in Swift.

Abo3atef
4

Swift 4.1 introduces a new -Osize optimization mode.

In Swift 4.1 the compiler now supports a new optimization mode which enables dedicated optimizations to reduce code size.

The Swift compiler comes with powerful optimizations. When compiling with -O the compiler tries to transform the code so that it executes with maximum performance. However, this improvement in runtime performance can sometimes come with a tradeoff of increased code size. With the new -Osize optimization mode the user has the choice to compile for minimal code size rather than for maximum speed.

To enable the size optimization mode on the command line, use -Osize instead of -O.

Further reading : https://swift.org/blog/osize/

casillas