5

How do algorithms using double values perform compared to int values? Is there much of a difference, or is it negligible?

In my case, I have a canvas that uses integers so far. But now that I'm implementing scaling, I'm probably going to switch everything to double. Would this have a big impact on calculations? If so, would rounding the doubles to only a few decimal places perhaps optimize performance?

Or am I totally on the path of over-optimization and should I just use doubles without any headache?

membersound
  • 66,525
  • 139
  • 452
  • 886
  • As many algorithms are still to be written, I'm interested though in the impact of using double instead of ints in general. – membersound Apr 30 '13 at 11:11
  • 1
    Depends on what you're doing. Best thing is to test both ways and compare them. – Bohemian Apr 30 '13 at 11:12
  • Create it using double, measure the time with `System.currentTimeMillis()`, and compare the performance of both – asifsid88 Apr 30 '13 at 11:15
  • This sounds very like premature optimisation; write it simply and clearly using the most appropriate tools (sounds like double in this case) and only optimise the code that’s a bottleneck. Otherwise you find that you've written a very confusing function that gives you a 90% speed boost (that sounds good) in a part of the program that is only responsible for 0.01% of the total cpu usage (aww shame). That said, this is an interesting question from a theoretical point of view – Richard Tingle Apr 30 '13 at 11:37
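The timing approach suggested in the comments can be sketched roughly like this (a crude micro-benchmark with hypothetical names; on a real JVM you would want proper warm-up and a harness like JMH, and in GWT you would have to measure in the browser instead):

```java
public class IntVsDoubleBench {
    public static void main(String[] args) {
        final int N = 100_000_000;

        // warm up both loops once so the JIT gets a chance to compile them
        sumInt(N);
        sumDouble(N);

        long t0 = System.currentTimeMillis();
        long intSum = sumInt(N);
        long t1 = System.currentTimeMillis();
        double dblSum = sumDouble(N);
        long t2 = System.currentTimeMillis();

        System.out.println("int    loop: " + (t1 - t0) + " ms (sum=" + intSum + ")");
        System.out.println("double loop: " + (t2 - t1) + " ms (sum=" + dblSum + ")");
    }

    // sum 0..n-1 using integer arithmetic
    static long sumInt(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    // the same sum using double arithmetic
    static double sumDouble(int n) {
        double sum = 0.0;
        for (double i = 0; i < n; i++) sum += i;
        return sum;
    }
}
```

The absolute numbers mean little on their own; the point is only to compare the two loops on the same machine under the same conditions.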

5 Answers5

6

You're in GWT, so ultimately your code will be JavaScript, and JavaScript has a single type for numeric data: Number, which corresponds to Java's Double.

Using integers in GWT can mean either that the generated code does more work than with doubles (doing a narrowing conversion of numbers to integer values), or that the code won't change at all (I don't know exactly what the GWT compiler does; it might also depend on context, such as crossing JSNI boundaries).

All in all, expect the same or slightly better performance using doubles (unless you later have to convert back to integers, of course); but generally speaking you're over-optimizing (also: optimization needs measurement/metrics; if you don't have them, you're on the “premature optimization” path).

Thomas Broyer
  • 63,827
  • 7
  • 86
  • 161
4

There is a measurable difference between integers and doubles, but doubles are generally also very fast.

Integers are still faster than doubles, because arithmetic operations on integers take very few clock cycles.

Doubles are fast too, because they are generally supported natively by a floating-point unit, which means the calculations are done in dedicated hardware. Even so, they are usually 2x to 40x slower than integers.

Having said this, the CPU will usually spend quite a bit of time on housekeeping like loops and function calls, so if it is fast enough with integers, most of the time (perhaps even 99% of the time), it will be fast enough with doubles.

The only time floating point numbers are orders of magnitude slower is when they must be emulated, because there is no hardware support. This generally only occurs on embedded platforms, or where uncommon floating point types are used (e.g. 128-bit floats, or decimal floats).

The results of some benchmarks can be found at:

but generally,

  • 32-bit platforms have a greater disparity between doubles and integers
  • integers are always at least twice as fast on adding and subtracting
Community
  • 1
  • 1
ronalchn
  • 11,755
  • 10
  • 47
  • 60
2

If you are going to change the type from integer to double in your program, you will also have to rewrite the lines of code that compare two integers. For example, if a and b are ints you can compare them with if (a == b), but after changing their type to double you should change this line to use a comparison method such as Double.compare (or an epsilon-based comparison), because == is unreliable for floating-point values.
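A small sketch of the pitfall this answer is warning about (variable names are just illustrative):

```java
public class DoubleCompare {
    public static void main(String[] args) {
        int ai = 3, bi = 3;
        System.out.println(ai == bi);              // true: == is exact for ints

        double a = 0.1 + 0.2, b = 0.3;
        System.out.println(a == b);                // false: rounding error, a is 0.30000000000000004
        System.out.println(Double.compare(a, b));  // 1: a is slightly larger than b

        // common workaround: compare within a small tolerance (epsilon)
        final double EPS = 1e-9;
        System.out.println(Math.abs(a - b) < EPS); // true
    }
}
```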

Waqas Ali
  • 1,572
  • 4
  • 29
  • 52
1

Not knowing the exact needs of your program, my instinct is that you're over-optimizing. When choosing between using ints or doubles, you usually base the decision on what type of value you need over which will run faster. If you need floating point values that allow for (not necessarily precise) decimal values, go for doubles. If you need precise integer values, go for ints.

A couple more points:

Rounding your doubles to certain fractions should have no impact on performance. In fact, the overhead required to round them in the first place would probably have a negative impact.

While I would argue not to worry about the performance differences between int and double, there is a significant difference between int and Integer. While an int is a primitive data type that can be used efficiently, an Integer is an object that essentially just holds an int. This incurs a significant overhead. Integers are useful in that they can be stored in collections like Vectors while ints cannot, but in all other cases it's best to use ints.
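A quick sketch of the int vs Integer difference (since Java 5, autoboxing converts between the two automatically, but each boxed value is still a separate object):

```java
import java.util.ArrayList;
import java.util.List;

public class IntVsInteger {
    public static void main(String[] args) {
        int primitive = 42;          // plain value, no object overhead
        Integer boxed = 42;          // autoboxed into an Integer object

        // Collections can only hold objects, so ints are boxed on insert
        List<Integer> list = new ArrayList<>();
        list.add(primitive);         // autoboxing: int -> Integer
        int back = list.get(0);      // auto-unboxing: Integer -> int

        // Pitfall: == on Integers compares object identity, not value
        Integer x = 1000, y = 1000;
        System.out.println(x == y);      // false on a default JVM (1000 is outside the small Integer cache)
        System.out.println(x.equals(y)); // true: compares the values
    }
}
```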

wheels
  • 198
  • 11
0

In general, maths that naturally fits as an integer will be faster than maths that naturally fits as a double. BUT trying to force double maths to work as integers is almost always slower: moving back and forth between the two costs more than the speed boost you get.

If you're considering something like:

I only want 1 decimal place within my 'quasi-integer float', so I'll just multiply everything by 10:

5.5 * 6.5

so 5.5 --> 55
and 6.5 --> 65

with a special multiplying function

public int specialIntegerMultiply(int a, int b){
    // a and b are each scaled by 10, so the product is scaled by 100;
    // dividing by 10 rescales it, but integer division truncates:
    // 55 * 65 / 10 = 357 (i.e. 35.7), while the true answer is 35.75
    return a*b/10;
}

Then for the love of god don't; it'll probably be slower with all the extra overhead, and it'll be really confusing to write.

P.S. Rounding the doubles will make no difference at all, as the remaining decimal places will still exist; they'll just all be 0 (in decimal, that is; in binary that won't even be true).
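To illustrate that P.S.: rounding just produces another full-width double, so the arithmetic afterwards costs exactly the same, and the binary representation often isn't exact anyway:

```java
public class RoundingDemo {
    public static void main(String[] args) {
        double x = 35.7512;
        // round to one decimal place: the result is still a 64-bit double
        double rounded = Math.round(x * 10) / 10.0;
        System.out.println(rounded);   // 35.8

        // even "round" decimal values are usually not exact in binary
        System.out.println(0.1 + 0.2); // 0.30000000000000004
    }
}
```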

Richard Tingle
  • 15,728
  • 5
  • 47
  • 71