23

There are many slim laptops that are cheap and great to use. Programming has the advantage that it can be done anywhere there is silence and comfort, since being able to concentrate for long hours is an important factor in doing effective work.

I'm kind of old fashioned, as I like my statically compiled C or C++, and those languages can take a long time to compile on power-constrained laptops, especially with C++11 and C++14.

I like to do 3D programming, and the libraries I use can be large and unforgiving: Bullet physics, Ogre3D, SFML, not to mention the resource hunger of modern IDEs.

There are several solutions to make building just faster:

  • Solution A: Don't use those large libraries, and come up with something lighter on your own to relieve the compiler. Write appropriate makefiles, don't use an IDE.

  • Solution B: Set up a build server elsewhere, have a makefile set up on a beefier machine, and automatically download the resulting executable. I don't think this is a casual solution, as you have to target your laptop's CPU.

  • Solution C: use the unofficial C++ module

  • ???

Any other suggestions?

jokoon
  • 5,251
  • 10
  • 42
  • 78
  • 8
    - using precompiled headers - reduce including headers in other header files and use forward declaration, if possible – Mr.Yellow Apr 09 '15 at 13:50
  • Qualified questions don't make sense, like "can you greatly improve?". The question must be unqualified ("can you improve?"), and the answer is qualified ("you can improve greatly"/"you can improve a bit"/"you cannot improve"). – Kerrek SB Apr 09 '15 at 14:11
  • If your time is worth it, a non-cheap laptop can improve timing across the board. Non-cheap doesn't have to mean thousands more, but just hundreds more. If your laptop can support it, a separate solid state drive for compiler and stable libraries also help. RAM drive for compiler output also helps greatly. – franji1 Apr 09 '15 at 14:12
  • 2
    Measure whether compiling or linking is the time sink. If it is compiling, then disable optimization and maybe use precompiled headers. If it is linking, then build and link against dynamic libs instead of static libs. – Markus Kull Apr 09 '15 at 14:15
  • What is your toolchain? Clang is faster than ICC, GCC and Visual C++. The gold linker and lld are faster than ld. CMake links faster than Autotools with libtool. Ninja is faster than make. Hardware-wise: an SSD is faster than a hard drive. – usr1234567 Apr 09 '15 at 16:09
  • 4
    What is "the unofficial C++ module"? – Lightness Races in Orbit Apr 09 '15 at 17:13
  • You should split your code up into several small modules. Most development should only involve compiling a few modules and then linking with the remaining modules (that haven't changed). You may want to consider making libraries too. – Thomas Matthews Apr 09 '15 at 17:29
  • 1
    You're not clear on what exactly is taking a long time. The fact that you are using feature-rich libraries should be irrelevant. Are you re-compiling third-party libraries over and over? You shouldn't. – screwnut Apr 09 '15 at 18:53
  • @LightningRacisinObrit I think unofficial modules is a reference to Clang's module implementation (http://clang.llvm.org/docs/Modules.html). I think it's still in a non-production-ready stage for C++. – Phil Wright Apr 10 '15 at 07:44

4 Answers

20

Compilation speed is something that can really be boosted, if you know how. It is always wise to think carefully about a project's design (especially in the case of large projects consisting of multiple modules) and to shape it so that the compiler can produce output efficiently.

1. Precompiled headers.

A precompiled header is a normal header (.h file) that contains the most common declarations, typedefs and includes. During compilation it is parsed only once - before any other source is compiled. During this process the compiler generates data in some internal (most likely binary) format, and then uses this data to speed up code generation.

This is a sample:

#pragma once

#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__

//Include common headers
#include "BaseConfig.h"
#include "Atomic.h"
#include "Limits.h"
#include "DebugDefs.h"
#include "CommonApi.h"
#include "Algorithms.h"
#include "HashCode.h"
#include "MemoryOverride.h"
#include "Result.h"
#include "ThreadBase.h"
//Others...

namespace Asx
{

    //Forward declare common types
    class String;
    class UnicodeString;

    //Declare global constants
    enum : Enum
    {
        ID_Auto     = Limits<Enum>::Max_Value,
        ID_None     = 0
    };

    enum : Size_t
    {
        Max_Size            = Limits<Size_t>::Max_Value,
        Invalid_Position    = Limits<Size_t>::Max_Value
    };

    enum : Uint
    {
        Timeout_Infinite    = Limits<Uint>::Max_Value
    };

    //Other things...

}

#endif /* __Asx_Core_Prerequisites_H__ */

In a project where a PCH is used, every source file usually contains an #include of this file (I don't know about other compilers, but in VC++ this is actually a requirement - every source file attached to a project configured to use a PCH must start with: #include PrecompiledHeaderName.h). Configuration of precompiled headers is very platform-dependent and beyond the scope of this answer.
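For example (a minimal sketch, not from the original answer - Example.cpp and Example.h are hypothetical names, and it assumes the header above is saved as Prerequisites.h), a consuming source file could look like this; the comments show how GCC and VC++ would typically build the PCH itself:

//Example.cpp - a translation unit that consumes the PCH.
//With GCC, the header is compiled once up front, e.g.:
//    g++ -std=c++14 -x c++-header Prerequisites.h -o Prerequisites.h.gch
//and the resulting Prerequisites.h.gch is picked up automatically,
//as long as this include comes first and the compile flags match.
//(VC++ instead uses /Yc to create the PCH and /Yu to use it.)
#include "Prerequisites.h"   //must be the very first include

#include "Example.h"         //headers specific to this module

namespace Asx
{
    //Normal code follows - the common declarations from the PCH
    //(Limits, ID_None, Max_Size, ...) are already available here.
}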

Note one important matter: things that are defined/included in the PCH should be changed only when absolutely necessary - every change can cause recompilation of the whole project (and other dependent modules)!

More about PCH:

Wiki
GCC Doc
Microsoft Doc

2. Forward declarations.

When you don't need the whole class definition, forward-declare the class to remove unnecessary dependencies in your code. This also implies extensive use of pointers and references where possible. Example:

#include "BigDataType.h"

class Sample
{
protected:
    BigDataType _data;
};

Do you really need to store _data as a value? Why not this way:

class BigDataType; //That's enough, #include not required

class Sample
{
protected:
    BigDataType* _data; //So much better now
};

This is especially beneficial for large types.
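The full definition is still needed wherever _data is actually used, but that cost is now paid once, in the source file (a minimal sketch, under the assumption that Sample.h also declares a constructor and destructor - neither is shown in the original snippet):

//Sample.cpp - the only translation unit that pays for the heavy include
#include "Sample.h"
#include "BigDataType.h"   //full definition required here, but nowhere else

Sample::Sample()
    : _data(new BigDataType())
{
}

Sample::~Sample()
{
    delete _data;
}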

3. Do not overuse templates.

Meta-programming is a very powerful tool in a developer's toolbox. But don't use templates when they are not necessary.

They are great for things like traits, compile-time evaluation, static reflection and so on. But they introduce a lot of trouble:

  • Error messages - if you have ever seen errors caused by improper usage of std:: iterators or containers (especially the complex ones, like std::unordered_map), then you know what this is all about.
  • Readability - complex templates can be very hard to read/modify/maintain.
  • Quirks - many of the techniques that templates are used for are not so well known, so maintenance of such code can be even harder.
  • Compile time - the most important one for us now:

Remember, if you define a function as:

template <class Tx, class Ty>
void sample(const Tx& xv, const Ty& yv)
{
    //body
}

it will be compiled for each distinct combination of Tx and Ty. If such a function is used often (and with many such combinations), it can really slow down the compilation process. Now imagine what will happen if you start to overuse templating for whole classes...
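As a small illustration (a hypothetical usage sketch, not part of the original answer), the three calls below already force the compiler to generate three separate functions:

#include <string>

template <class Tx, class Ty>
void sample(const Tx& xv, const Ty& yv)
{
    //body
}

int main()
{
    sample(1, 2.0);                 //instantiates sample<int, double>
    sample(std::string("a"), 5);    //instantiates sample<std::string, int>
    sample(3.5f, 'c');              //instantiates sample<float, char>
}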

4. Using PIMPL idiom.

This is a very useful technique that allows us to:

  • hide implementation details
  • speed up code generation
  • easy updates, without breaking client code

How does it work? Consider a class that contains a lot of data (for example, one representing a person). It could look like this:

class Person
{
protected:
    string name;
    string surname;
    Date birth_date;
    Date registration_date;
    string email_address;
    //and so on...
};

Our application evolves and we need to extend/change the Person definition. We add some new fields, remove others... and everything breaks: the size of Person changes, the names of fields change... cataclysm. In particular, all client code that depends on Person's definition needs to be changed/updated/fixed. Not good.

But we can do it the smart way - hide the details of Person:

class Person
{
protected:
    class Details;
    Details* details;
};
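The heavy members then move into the source file, where Details is actually defined (a minimal sketch, assuming the implementation lives in a Person.cpp, that the Date type comes from a Date.h header, and that Person.h declares a constructor and destructor - the original snippet shows none of these):

//Person.cpp - the only file that sees the real data layout
#include "Person.h"
#include <string>
#include "Date.h"          //heavy includes are now confined to this file

class Person::Details
{
public:
    std::string name;
    std::string surname;
    Date birth_date;
    Date registration_date;
    std::string email_address;
    //and so on...
};

Person::Person()  : details(new Details()) { }
Person::~Person() { delete details; }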

Now we gain a few nice things:

  • client code cannot depend on how Person is defined internally
  • no recompilation is needed as long as we don't modify the public interface used by client code
  • we reduce compilation time, because the definitions of string and Date no longer need to be present (in the previous version we had to include the appropriate headers for these types, which added additional dependencies).

5. #pragma once directive.

Although it may give no speed boost, it is clearer and less error-prone. It is basically the same thing as using include guards:

#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__

//Content

#endif /* __Asx_Core_Prerequisites_H__ */

It prevents the same file from being parsed multiple times. Although #pragma once is not standard (in fact, no pragma is - pragmas are reserved for compiler-specific directives), it is quite widely supported (examples: VC++, GCC, Clang, ICC) and can be used without worry - compilers should ignore unknown pragmas (more or less silently).
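For comparison, the same header protected by the pragma alone needs just a single line at the top instead of the guard/endif pair:

#pragma once

//Content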

6. Eliminating unnecessary dependencies.

Very important point! When code is being refactored, dependencies often change. For example, if you decide to do some optimization and use pointers/references instead of values (see points 2 and 4 of this answer), some includes can become unnecessary. Consider:

#include "Time.h"
#include "Day.h"
#include "Month.h"
#include "Timezone.h"

class Date
{
protected:
    Time time;
    Day day;
    Month month;
    Uint16 year;
    Timezone tz;

    //...
};

This class has been changed to hide implementation details:

//These are no longer required!
//#include "Time.h"
//#include "Day.h"
//#include "Month.h"
//#include "Timezone.h"

class Date
{
protected:
    class Details;
    Details* details;

    //...
};

It is good to track down such redundant includes, whether by hand, with built-in tools (like the VS Dependency Visualizer) or with external utilities (for example, GraphViz).

Visual Studio also has a very nice option - if you right-click any file, you will see the option 'Generate Graph of Include Files' - it generates a nice, readable graph that can easily be analyzed and used to track down unnecessary dependencies.

Sample graph, generated for my String.h file:

Sample Graph

Mateusz Grzejek
  • 10,635
  • 3
  • 30
  • 47
  • This is on its way to being a good answer, but in its current form it is basically link-only. It would be worth adding a short description of precompiled headers, and expanding points 3 & 4. – mjs Apr 09 '15 at 14:18
  • 2
    I suggest mentioning that the `#pragma once` is compiler specific. The [traditional include guards](http://en.wikipedia.org/wiki/Include_guard) are more portable and not compiler dependent. – Thomas Matthews Apr 09 '15 at 17:26
  • @ThomasMatthews I already included that - `Although once is not standard (pragmas are reserved for compiler-specific directives), it is quite widely supported (examples: VC++, GCC, CLang, ICC)`. – Mateusz Grzejek Apr 09 '15 at 17:39
  • 1
    Precompiled headers are not normal include files. Depending on the compiler, they are somewhere between a collection of preprocessed headers and an internal near-binary compiler representation of the header code. – rubenvb Apr 10 '15 at 06:40
  • I really need to search how to make a PCH in a makefile, doesn't seem very hard to do, but the doc you linked really needs a TLDR. – jokoon Apr 10 '15 at 07:12
4

As Mr. Yellow indicated in a comment, one of the best ways to improve compile times is to pay careful attention to your use of header files. In particular:

  • Use precompiled headers for any header that you don't expect to change, including operating system headers, third-party library headers, etc.
  • Reduce the number of headers included from other headers to the minimum necessary.
    • Determine whether an include is needed in the header or whether it can be moved to the cpp file. This sometimes causes a ripple effect, because someone else was depending on you to include the header for it, but it is better in the long term to move the include to the place where it's actually needed.
    • Using forward declared classes, etc. can often eliminate the need to include the header in which that class is declared. Of course, you still need to include the header in the cpp file, but that only happens once, as opposed to happening every time the corresponding header file is included.
  • Use #pragma once (if it is supported by your compiler) rather than include guard symbols. This means the compiler does not even need to open the header file to discover the include guard. (Of course many modern compilers figure that out for you anyway.)

Once you have your header files under control, check your makefiles to be sure you no longer have unnecessary dependencies. The goal is to rebuild everything you need to, but no more. Sometimes people err on the side of building too much because that is safer than building too little.

Dale Wilson
  • 8,374
  • 2
  • 27
  • 47
1

If you've tried all of the above, there's a commercial product that does wonders, assuming you have some available PCs on your LAN. We used to use it at a previous job. It's called Incredibuild (www.incredibuild.com) and it shrunk our build time from over an hour (C++) to about 10 minutes. From their website:

IncrediBuild accelerates build time through efficient parallel computing. By harnessing idle CPU resources on the network, IncrediBuild transforms a network of PCs and servers into a private computing cloud that can best be described as a “virtual supercomputer.” Processes are distributed to remote CPU resources for parallel processing, dramatically shortening build time by up to 90% or more.

gstar
  • 146
  • 3
  • 6
0

Another point that's not mentioned in the other answers: Templates. Templates can be a nice tool, but they have fundamental drawbacks:

  • The template, and all the templates it depends upon, must be included. Forward declarations don't work.

  • Template code is frequently compiled several times. In how many .cpp files do you use an std::vector<>? That is how many times your compiler will need to compile it!

    (I'm not advocating against the use of std::vector<>, on the contrary you should use it frequently; it's simply an example of a really frequently used template here.)

  • When you change the implementation of a template, you must recompile everything that uses that template.

With template-heavy code, you often have relatively few compilation units, but each of them is huge. Of course, you can go all-template and have only a single .cpp file that pulls in everything. That would avoid compiling template code multiple times; however, it renders make useless: any compilation will take as long as a compilation after a clean.

I would recommend going in the opposite direction: avoid template-heavy or template-only libraries, and avoid creating complex templates. The more interdependent your templates become, the more repeated compilation is done, and the more .cpp files need to be rebuilt when you change a template. Ideally any template you have should not make use of any other template (unless that other template is std::vector<>, of course...).
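If a template really is instantiated with the same arguments in many translation units, C++11's extern template can at least stop the compiler from re-instantiating it everywhere (a hedged sketch; the Widget type and the choice to instantiate std::vector<Widget> are hypothetical, not something this answer prescribes):

#include <vector>

struct Widget { int id; };                 //hypothetical payload type

//In a shared header: promise that the instantiation exists elsewhere,
//so translation units including it skip compiling std::vector<Widget>:
extern template class std::vector<Widget>;

//In exactly one .cpp file: actually instantiate it, once:
template class std::vector<Widget>;

int main()
{
    std::vector<Widget> w;                 //uses the single shared instantiation
    w.push_back(Widget{7});
}

Only the one .cpp file containing the explicit instantiation has to recompile that template code; everyone else just links against it.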

cmaster - reinstate monica
  • 33,875
  • 7
  • 50
  • 100
  • Bad advice IMO. There's little wrong with using `std::vector` in a class implementation. That doesn't add much to its compilation time, and it doesn't make the class implementation bigger. A far more reasonable advice would be to keep templates out of header files, because it's that particular practice which can cause big Translation Units. – MSalters Apr 09 '15 at 16:46
  • @MSalters: You didn't read my answer carefully: "I'm not advocating against the use of `std::vector<>`..." The STL is the one big template library that one should never avoid using. – cmaster - reinstate monica Apr 09 '15 at 17:00