267

std::unique_ptr has support for arrays, for instance:

std::unique_ptr<int[]> p(new int[10]);

but is it needed? It is probably more convenient to use std::vector or std::array.

Do you find any use for that construct?

ks1322
fen
  • 7
    For completeness, I should point out that there is no `std::shared_ptr<T[]>`, but there should be, and probably will be in C++14 if anyone could be bothered to write up a proposal. In the meantime, there's always `boost::shared_array`. – Pseudonym May 30 '13 at 00:23
  • 20
    `std::shared_ptr<T[]>` is in C++17 now. – 陳 力 Nov 01 '18 at 02:04
  • You can find multiple ways to do anything on a computer. This construct does have use, especially in a hot path, because it eradicates the overhead of container operations if you know exactly how to target your array. Additionally, it gives you character arrays with no doubt about contiguous storage. – kevr Apr 19 '20 at 17:13
  • 1
    I found this useful for interoperating with C structs where a member of the struct determines its size. I want the memory automatically deallocated but there is no type of the right size for deallocation, so I used a char array. – fuzzyTew Jul 05 '20 at 17:08

17 Answers

283

Some people do not have the luxury of using std::vector, even with allocators. Some people need a dynamically sized array, so std::array is out. And some people get their arrays from other code that is known to return an array; and that code isn't going to be rewritten to return a vector or something.

By allowing unique_ptr<T[]>, you service those needs.

In short, you use unique_ptr<T[]> when you need to. When the alternatives simply aren't going to work for you. It's a tool of last resort.
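
As a rough sketch of that last case (adopting an array handed to you by code you don't control), something like the following works; the legacy_make_samples function is made up for illustration:

#include <cstddef>
#include <memory>

// Hypothetical legacy function (stands in for code you don't control):
// returns an array of n ints allocated with new[]; the caller must delete[] it.
int* legacy_make_samples(std::size_t n)
{
    return new int[n];
}

void consume(std::size_t n)
{
    // Adopt ownership immediately, so delete[] runs even if something throws later.
    std::unique_ptr<int[]> samples(legacy_make_samples(n));

    samples[0] = 42;           // use like a plain array
}                              // delete[] happens here automatically

int main() { consume(10); }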

Nicol Bolas
  • 35
    @NoSenseEtAl: I'm not sure what part of "some people aren't allowed to do that" eludes you. Some projects have very specific requirements, and among them may be "you don't get to use `vector`". You can argue whether those are reasonable requirements or not, but you can't deny that they *exist*. – Nicol Bolas May 30 '13 at 15:48
  • 6
    ok, i see, i thought by luxury you meant perf considerations, not ignorance :) – NoSenseEtAl May 30 '13 at 16:27
  • btw I'm not the downvoter, though like I said I find "luxury" to be an unfortunate word here :) nice answer in general – NoSenseEtAl May 31 '13 at 08:07
  • 27
    There is no reason in the world why someone wouldn't be able to use `std::vector` if they can use `std::unique_ptr`. – Miles Rout Apr 29 '14 at 14:48
  • 4
    I'm curious if the comment *some projects have very specific requirements, and among them may be "you don't get to use `vector`"* is purely theoretical, or if it's based on known situations, and if such a restriction might be reasonable. – Dan Nissenbaum Jun 13 '14 at 11:28
  • 1
    I'd say a more practical example would be using a function like: `foo(T *&buf_to_init) // uses new [] to initialize the pointer`. In this case I can wrap that pointer in a `unique_ptr` after calling the function. – Ed S. Jul 08 '14 at 04:57
  • 4
    The only reason not to use vector is if you don't want to pay the cost of value-initializing an array. – Charles Salvia Jul 20 '14 at 19:27
  • 3
    Or if you want to make sure as a matter of policy that no one forgets an & or std::move and accidentally *copies* the entire thing. Isn't that what type-checking is for?? – Jimmy Hartzell Aug 28 '14 at 16:17
  • 79
    here's a reason to not use vector: sizeof(std::vector) == 24; sizeof(std::unique_ptr) == 8 – Arvid Sep 12 '14 at 22:34
  • 21
    @DanNissenbaum These projects exist. In some industries that are under very strict scrutiny, for example aviation or defense, the standard library is off-limits because it is difficult to verify and prove that it is correct to whatever governing body sets the regulations. You may argue that the standard library is well tested, and I would agree with you, but you and I don't make the rules. – Emily L. Sep 18 '14 at 13:37
  • 23
    @DanNissenbaum Also some hard real-time systems are not allowed to use dynamic memory allocation at all as the delay a system call causes might not be theoretically bounded and you can not prove the real-time behavior of the program. Or the bound may be too large which breaks your WCET limit. Although not applicable here, as they wouldn't use `unique_ptr` either but those kinds of projects really do exist. – Emily L. Sep 18 '14 at 13:42
  • 1
  • I don't think it's a question of whether someone can use std::vector over std::unique_ptr at all. I use std::unique_ptr for certain arrays because I can be sure that if an exception is thrown, it's going to be cleaned up properly. – The Welder Oct 23 '14 at 10:45
  • 6
    Lots of preexisting libraries and OS api's only accept and output data with pointers to arrays. Sometimes the size is not predetermined, so dynamic allocation of that array is needed. When it is, std::unique_ptr is a great way to make it exception safe. – VoidStar Nov 06 '14 at 02:38
  • 3
    @MilesRout: You can't use vector if T is not copy-constructible (for example, push_back and emplace_back don't compile), but you can still use unique_ptr. – Joshua Chia Apr 04 '15 at 09:42
  • 3
    @Syncopated: FYI: You *can* use `vector` even if `T` is not copy-constructible. You cannot *call* `vector::push_back` with such a `T`, but you most assuredly can use `vector::emplace_back`. – Nicol Bolas Nov 30 '15 at 22:30
  • 3
  • @MilesRout, @DanNissenbaum - You might also have an API constraint, like you can't change a signature. So you want to use the smart pointer for RAII and exception safety, and then finish the function with a `release()` to return the pointer to the caller. The code I am looking at does the same because it can't break an API for external callers. – jww Nov 15 '16 at 12:58
  • 1
    Late to the party, but another scenario I had recently was where I didn't want to allow resizing, but wanted non-const access to the objects. A `const unique_ptr` was the solution – Baruch Mar 09 '17 at 07:48
  • As an example of a project where "thou shalt not use `std::vector`" is a requirement, I have worked on a software development project for an embedded system for which we could not use `std::vector` because of the non-deterministic add/delete time (when it dynamically resizes itself). Basically every operation needed to take a known, always consistent amount of time, and adding/deleting elements from `std::vector` does not. – Engineero Oct 31 '17 at 16:34
  • 5
    @Engineero: "*adding/deleting elements from std::vector does not.*" Actually, it does. So long as you reserve sufficient space for all of the elements you want to add, `vector::push_back` will take a consistent amount of time. – Nicol Bolas Oct 31 '17 at 16:58
  • 1
    Good point. Maybe it was the costlier initialization, or just some hardcore preferences carried over from C that dictated the policy. – Engineero Oct 31 '17 at 17:02
  • @NicolBolas on some embedded platforms allocating memory doesn't actually commit it until it is accessed, because they still have a memory model similar to what the IBM PC had in XMS times. I remember times, even on PC, when iterating through a large array didn't have a consistent time cost until the whole array had been touched once. – Swift - Friday Pie Oct 01 '18 at 08:24
  • `std::unique_ptr<int[], void(*)(void*)> p(reinterpret_cast<int*>(malloc(7*sizeof(int))), free);` - assuming the `malloc` is done within some C-API without C++ equivalent... – Aconcagua Nov 19 '18 at 06:40
  • "It's a tool of last resort". Well, in most cases, yes, but recently I had to replace my vector with a unique_ptr array for two reasons: to have uninitialized data and to prevent usage of "resize" method incorrectly. My resize had to behave slightly differently (in a more restricted way) and writing a custom allocator would be an overkill. – reconn Apr 12 '20 at 07:29
147

There are tradeoffs, and you pick the solution which matches what you want. Off the top of my head:

Initial size

  • vector and unique_ptr<T[]> allow the size to be specified at run-time
  • array only allows the size to be specified at compile time

Resizing

  • array and unique_ptr<T[]> do not allow resizing
  • vector does

Storage

  • vector and unique_ptr<T[]> store the data outside the object (typically on the heap)
  • array stores the data directly in the object

Copying

  • array and vector allow copying
  • unique_ptr<T[]> does not allow copying

Swap/move

  • vector and unique_ptr<T[]> have O(1) time swap and move operations
  • array has O(n) time swap and move operations, where n is the number of elements in the array

Pointer/reference/iterator invalidation

  • array ensures pointers, references and iterators will never be invalidated while the object is live, even on swap()
  • unique_ptr<T[]> has no iterators; pointers and references are only invalidated by swap() while the object is live. (After swapping, pointers point into the array that you swapped with, so they're still "valid" in that sense.)
  • vector may invalidate pointers, references and iterators on any reallocation (and provides some guarantees that reallocation can only happen on certain operations).

Compatibility with concepts and algorithms

  • array and vector are both Containers
  • unique_ptr<T[]> is not a Container

I do have to admit, this looks like an opportunity for some refactoring with policy-based design.
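
For illustration only, a minimal sketch of a couple of those rows (run-time size and O(1) move, but no copying, for unique_ptr<T[]>; copying and resizing for vector):

#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

int main()
{
    std::size_t n = 1000;                        // size known only at run time

    std::unique_ptr<double[]> a(new double[n]);  // run-time size, move-only
    auto b = std::move(a);                       // O(1) move; a now holds nullptr
    // auto c = b;                               // would not compile: copying is disallowed

    std::vector<double> v(n);                    // run-time size, copyable, resizable
    auto w = v;                                  // deep copy of all n elements
    v.resize(2 * n);                             // may reallocate, invalidating pointers into v
}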

Pseudonym
  • 2
    I am not sure I understand what you mean in the context of _pointer invalidation_. Is this about pointers to the objects themselves, or pointers to the elements? Or something else? What kind of guarantee do you get from an array that you don't get from a vector? – jogojapan May 29 '13 at 02:52
  • 3
    Suppose that you have an iterator, a pointer, or a reference to an element of a `vector`. Then you increase the size or capacity of that `vector` such that it forces a reallocation. Then that iterator, pointer or reference no longer points to that element of the `vector`. This is what we mean by "invalidation". This problem doesn't happen to `array`, because there is no "reallocation". Actually, I just noticed a detail with that, and I've edited it to suit. – Pseudonym May 29 '13 at 03:33
  • 2
    Ok, there can't be invalidation as a result of reallocation in an array or `unique_ptr` because there is no reallocation. But of course, when the array goes out of scope, pointers to specific elements will still be invalidated. – jogojapan May 29 '13 at 03:38
  • Yes, all bets are off if the object is no longer live. – Pseudonym May 29 '13 at 03:43
  • Although `unique_ptr` has no iterator built-in, you can still iterate over it like a normal `T[]`, which means you need the size of the range to be iterated over. – rubenvb May 29 '13 at 08:43
  • 3
    @rubenvb Sure you can, but you can't (say) use range-based for loops directly. Incidentally, unlike a normal `T[]`, the size (or equivalent information) must be hanging around somewhere for `operator delete[]` to correctly destroy the elements of the array. It'd be nice if the programmer had access to that. – Pseudonym May 30 '13 at 00:09
  • @Pseudonym Though it cannot hang around. Although it must know how to release the memory that is available for the `new[]`ed array, it doesn't actually have to allocate exactly that. It may overshoot to align the memory, it may share memory with other objects in a block that just counts the number of blocks that were allocated in it. Or it may not do any of that. My point is that allocation can be done in many ways, and forcing the allocator to keep track of the size of the block you allocated, although logical, may not be optimal. C++ says: don't pay for what you don't use. – Aidiakapi May 27 '15 at 14:24
  • 1
    @Aidiakapi C++ requires that if you `delete[]` an array of objects which have destructors, the destructors get run. For that reason, the C++ run time already needs to know the actual size of most arrays that have been allocated that way. Now, decent C++ implementations do optimise the destructors out if the objects in the array have no destructor (e.g. a basic type) or a destructor which does nothing. However, they typically don't optimise the memory allocator for this case. It could happen, but it doesn't. So the size information is there. – Pseudonym May 28 '15 at 02:31
  • @Pseudonym But that in no way means that it has to have this information readily available. Like I said, my point isn't that the information isn't there with most allocators/default libraries, I just stated that it doesn't have to be. Afaik in VC++ the size is actually stored as an integer before the array in memory; you could literally access it, but that's undefined behavior. There could be algorithms out there that do have the information, but access to it will be O(n), similar to the length of a linked list or a C style string. O(n) for the length is no problem since destruction is O(n) anyway. – Aidiakapi May 28 '15 at 20:17
  • 1
    In addition to that, if the object is trivially destructible, it doesn't have to keep track of the length at all; it might optimize this out and just store the block size instead. All I am trying to say is that it's up to the allocator to decide what is efficient, and forcing an allocator to provide this information, even though there are plenty of cases where it's unnecessary, limits this. – Aidiakapi May 28 '15 at 20:19
  • 1
    All I'm saying is that in the vast majority of cases, the implementation needs to retain array size information to implement C++ semantics correctly. There's no reason why the user couldn't have access to that information, and it would still obey the rule that you don't pay for what you don't use. – Pseudonym May 28 '15 at 23:49
  • A great answer, but you're missing another option - `unique_ptr<std::vector<T>>` :) – Violet Giraffe Dec 20 '19 at 14:05
  • True! Arguably the worst of both worlds. – Pseudonym Dec 21 '19 at 11:09
84

One reason you might use a unique_ptr is if you don't want to pay the runtime cost of value-initializing the array.

std::vector<char> vec(1000000); // allocates AND value-initializes 1000000 chars

std::unique_ptr<char[]> p(new char[1000000]); // allocates storage for 1000000 chars

The std::vector constructor and std::vector::resize() will value-initialize T - but new T[n] will not do that if T is a POD (it default-initializes, leaving the elements with indeterminate values).

See Value-Initialized Objects in C++11 and std::vector constructor

Note that vector::reserve is not an alternative here: Is accessing the raw pointer after std::vector::reserve safe?

It's the same reason a C programmer might choose malloc over calloc.
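
If C++20 is available, std::make_unique_for_overwrite gives the same non-zeroing behaviour without a naked new; a minimal sketch:

#include <memory>

int main()
{
    // C++11/14: skips value-initialization, but spells out the new expression.
    std::unique_ptr<char[]> p(new char[1000000]);

    // C++20: same effect (default-initialization, i.e. no zeroing for char),
    // without a naked new.
    auto q = std::make_unique_for_overwrite<char[]>(1000000);
}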

Charles Salvia
  • But this reason is [not the only solution](http://stackoverflow.com/a/15220853/673852). – Ruslan Aug 23 '16 at 13:33
  • @Ruslan In the linked solution the elements of the dynamic array are still value-initialised, but the value initialisation does nothing. I would agree that an optimiser that fails to realise that doing nothing 1000000 times can be implemented by no code is not worth a dime, but one might prefer not to depend on this optimisation at all. – Marc van Leeuwen Mar 25 '17 at 13:30
  • 1
    yet another possibility is to provide to `std::vector` a [custom allocator](https://stackoverflow.com/a/15966795/1023390) which avoids construction of types which are `std::is_trivially_default_constructible` and destruction of objects which are `std::is_trivially_destructible`, though strictly this violates the C++ standard (since such types are not default initialised). – Walter May 25 '17 at 21:37
  • Also `std::unique_ptr` doesn't provide any bounds checking, contrary to a lot of `std::vector` implementations. – diapir Dec 26 '17 at 17:09
  • @diapir It's not about the implementation: `std::vector` is required by the Standard to check bounds in `.at()`. I guess you meant that some implementations have debug modes that will check in `.operator[]` too, but I consider that to be useless for writing good, portable code. – underscore_d Jun 26 '20 at 08:26
31

An std::vector can be copied around, while unique_ptr<int[]> allows expressing unique ownership of the array. std::array, on the other hand, requires the size to be determined at compile-time, which may be impossible in some situations.
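
A minimal sketch of what that unique-ownership expressiveness buys you (the Image type here is invented for illustration):

#include <cstddef>
#include <memory>
#include <utility>

// An Image owns its pixel buffer exclusively. Copying an Image is a compile
// error, so nobody can silently duplicate a large buffer; moving transfers it.
struct Image {
    std::size_t size;
    std::unique_ptr<unsigned char[]> pixels;

    explicit Image(std::size_t n) : size(n), pixels(new unsigned char[n]) {}
};

int main()
{
    Image a(1 << 20);
    Image b = std::move(a);   // fine: ownership moves to b
    // Image c = b;           // does not compile: Image is move-only by construction
}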

Andy Prowl
  • 2
    Just because something *can* be copied around doesn't mean it has to be. – Nicol Bolas May 23 '13 at 10:41
  • 4
    @NicolBolas: I don't understand. One may want to prevent that for the same reason why one would use `unique_ptr` instead of `shared_ptr`. Am I missing something? – Andy Prowl May 23 '13 at 10:42
  • 4
    `unique_ptr` does more than just prevent accidental misuse. It's also smaller and lower overhead than `shared_ptr`. The point being that, while it's nice to have semantics in a class that prevent "misuse", that's not the only reason to use a particular type. And `vector` is far more useful as an array storage than `unique_ptr`, if for no reason other than the fact that it has a *size*. – Nicol Bolas May 23 '13 at 10:43
  • 1
    @NicolBolas: I thought the main reason for using `unique_ptr` rather than `shared_ptr` was that one models unique ownership and the other does not. While it is true that `unique_ptr` has zero overhead, I would tend to see ownership semantics as the main discriminant here. – Andy Prowl May 23 '13 at 10:47
  • 3
    I thought I made the point clear: there are *other reasons* to use a particular type than that. Just like there are reasons to prefer `vector` over `unique_ptr` where possible, instead of just saying, "you can't copy it" and therefore pick `unique_ptr` when you don't want copies. Stopping someone from doing the wrong thing is not necessarily the most important reason to pick a class. – Nicol Bolas May 23 '13 at 10:49
  • @NicolBolas: Perhaps you made the point clear, and I am just not able to understand it. I'm not saying it is a limitation of your explanation. Forgive if I'm still learning, I just want to understand. – Andy Prowl May 23 '13 at 10:58
  • 8
    `std::vector` has more overhead than a `std::unique_ptr` -- it uses ~3 pointers instead of ~1. `std::unique_ptr` blocks copy construction but enables move construction, which if semantically the data you are working with can only be moved but not copied, infects the `class` containing the data. Having an operation on data that is *not valid* actually makes your container class worse, and "just don't use it" does not wash away all sins. Having to put every instance of your `std::vector` into a class where you manually disable `move` is a headache. `std::unique_ptr` has a `size`. – Yakk - Adam Nevraumont May 23 '13 at 13:50
  • 1
    @Yakk: I'm not sure how to take your comment, is it meant to correct something in my answer? – Andy Prowl May 23 '13 at 13:52
  • Hmm, I should have included @NicolBolas in it. I read through the comments, and was commenting on them. :) – Yakk - Adam Nevraumont May 23 '13 at 13:57
  • Heck, std::unique_ptr<std::vector<T>> is perhaps what you're looking for if you want a unique pointer to a dynamically re-sizable array. – Miles Rout Apr 29 '14 at 14:49
  • @MilesRout Well, of course, but then you have 2 levels of pointer indirection, which will hurt readability and performance, and 'needing' such a thing probably indicates a strange design. – underscore_d Jun 26 '20 at 08:28
26

Scott Meyers has this to say in Effective Modern C++

The existence of std::unique_ptr for arrays should be of only intellectual interest to you, because std::array, std::vector, std::string are virtually always better data structure choices than raw arrays. About the only situation I can conceive of when a std::unique_ptr<T[]> would make sense would be when you're using a C-like API that returns a raw pointer to a heap array that you assume ownership of.

I think that Charles Salvia's answer is relevant though: std::unique_ptr<T[]> lets you allocate an array whose size is not known at compile time without paying for value-initialization. What would Scott Meyers have to say about this motivation for using std::unique_ptr<T[]>?

newling
  • 6
    It sounds like he simply didn't envision a few use cases, namely a buffer whose size is fixed but unknown at compile time, and/or a buffer for which we don't allow copies. There's also efficiency as a possible reason to prefer it to `vector` https://stackoverflow.com/a/24852984/2436175. – Antonio Jul 17 '18 at 21:43
17

Unlike std::vector and std::array, a std::unique_ptr can own a NULL pointer.
This comes in handy when working with C APIs that expect either an array or NULL:

void legacy_func(const int *array_or_null);

void some_func() {    
    std::unique_ptr<int[]> ptr;
    if (some_condition) {
        ptr.reset(new int[10]);
    }

    legacy_func(ptr.get());
}
george
10

I have used unique_ptr<char[]> to implement preallocated memory pools in a game engine. The idea is to provide preallocated memory pools, used instead of dynamic allocations, for returning collision query results and other stuff like particle physics, without having to allocate / free memory at each frame. It's pretty convenient for this kind of scenario, where you need memory pools to allocate objects with a limited lifetime (typically one, two or three frames) that do not require destruction logic (only memory deallocation).
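
A heavily simplified sketch of that idea (names are invented, no alignment handling, single-threaded, illustration only):

#include <cstddef>
#include <memory>

// A trivial per-frame bump allocator: one big preallocated block, handed out
// linearly, then "freed" all at once by resetting the offset.
class FramePool {
public:
    explicit FramePool(std::size_t bytes)
        : size_(bytes), used_(0), buffer_(new unsigned char[bytes]) {}

    void* allocate(std::size_t bytes) {
        if (used_ + bytes > size_) return nullptr;   // pool exhausted
        void* p = buffer_.get() + used_;
        used_ += bytes;
        return p;
    }

    void reset() { used_ = 0; }                      // call once per frame

private:
    std::size_t size_;
    std::size_t used_;
    std::unique_ptr<unsigned char[]> buffer_;        // released automatically in ~FramePool
};

int main()
{
    FramePool pool(1 << 20);
    void* scratch = pool.allocate(256);              // per-frame scratch memory
    (void)scratch;
    pool.reset();                                    // end of frame: everything "freed" at once
}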

Engineero
Simon Ferquel
10

A common pattern can be found in some Windows Win32 API calls, in which the use of std::unique_ptr<T[]> can come in handy, e.g. when you don't exactly know how big an output buffer should be when calling some Win32 API (that will write some data inside that buffer):

// Buffer dynamically allocated by the caller, and filled by some Win32 API function.
// (Allocation will be made inside the 'while' loop below.)
std::unique_ptr<BYTE[]> buffer;

// Buffer length, in bytes.
// Initialize with some initial length that you expect to succeed at the first API call.
UINT32 bufferLength = /* ... */;

LONG returnCode = ERROR_INSUFFICIENT_BUFFER;
while (returnCode == ERROR_INSUFFICIENT_BUFFER)
{
    // Allocate buffer of specified length
    buffer.reset( new BYTE[bufferLength] );
    //        
    // Or, in C++14, could use make_unique() instead, e.g.
    //
    // buffer = std::make_unique<BYTE[]>(bufferLength);
    //

    //
    // Call some Win32 API.
    //
    // If the size of the buffer (stored in 'bufferLength') is not big enough,
    // the API will return ERROR_INSUFFICIENT_BUFFER, and the required size
    // in the [in, out] parameter 'bufferLength'.
    // In that case, there will be another try in the next loop iteration
    // (with the allocation of a bigger buffer).
    //
    // Else, we'll exit the while loop body, and there will be either a failure
    // different from ERROR_INSUFFICIENT_BUFFER, or the call will be successful
    // and the required information will be available in the buffer.
    //
    returnCode = ::SomeApiCall(inParam1, inParam2, inParam3, 
                               &bufferLength, // size of output buffer
                               buffer.get(),  // output buffer pointer
                               &outParam1, &outParam2);
}

if (returnCode != ERROR_SUCCESS)
{
    // Handle failure, or throw exception, etc.
    ...
}

// All right!
// Do some processing with the returned information...
...
Mr.C64
10

In a nutshell: it's by far the most memory-efficient.

A std::string comes with a pointer, a length, and a "short-string-optimization" buffer. But my situation is that I need to store a string that is almost always empty, in a structure that I have hundreds of thousands of. In C, I would just use char *, and it would be null most of the time. Which works for C++, too, except that a char * has no destructor, and doesn't know to delete itself. By contrast, a std::unique_ptr<char[]> will delete itself when it goes out of scope. An empty std::string takes up 32 bytes, but an empty std::unique_ptr<char[]> takes up 8 bytes, that is, exactly the size of its pointer.

The biggest downside is, every time I want to know the length of the string, I have to call strlen on it.
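
A sketch of that layout (the Record struct and helpers are invented for illustration):

#include <cstring>
#include <memory>

// Hundreds of thousands of these exist; 'note' is almost always empty (null).
struct Record {
    int id = 0;
    std::unique_ptr<char[]> note;   // 8 bytes on a typical 64-bit platform
};

void set_note(Record& r, const char* text)
{
    std::size_t len = std::strlen(text);
    r.note.reset(new char[len + 1]);
    std::memcpy(r.note.get(), text, len + 1);        // copy including the trailing '\0'
}

std::size_t note_length(const Record& r)
{
    return r.note ? std::strlen(r.note.get()) : 0;   // the downside: strlen on every query
}

int main()
{
    Record r;
    set_note(r, "rarely present");
    return note_length(r) == 14 ? 0 : 1;
}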

Engineero
jorgbrown
9

I faced a case where I had to use std::unique_ptr<bool[]>, which was with the HDF5 library (a library for efficient binary data storage, used a lot in science). The std::vector<bool> specialization packs its elements, storing 8 bools in every byte (here under Visual Studio 2015), which is a catastrophe for something like HDF5, which expects a plain array of bools and knows nothing about that packing. With std::vector<bool>, HDF5 was eventually reading garbage because of that packing.

Guess who came to the rescue, in a case where std::vector didn't work, and I needed to allocate a dynamic array cleanly? :-)
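
A sketch of the shape of the problem, with a made-up C-style function standing in for the HDF5 call:

#include <cstddef>
#include <memory>

// Hypothetical C-style API standing in for the HDF5 write call:
// it expects a contiguous array of real bools.
extern "C" void write_bool_dataset(const bool*, std::size_t)
{
    // Stand-in for the real library call.
}

void save_flags(std::size_t n)
{
    // std::vector<bool> is a packed, bit-level specialization and has no
    // bool* data() member to hand to such an API; unique_ptr<bool[]> owns
    // a genuine bool array.
    std::unique_ptr<bool[]> flags(new bool[n]());    // value-initialized to false

    flags[0] = true;
    write_bool_dataset(flags.get(), n);
}

int main() { save_flags(8); }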

The Quantum Physicist
8

I can't disagree with the spirit of the accepted answer strongly enough. "A tool of last resort"? Far from it!

The way I see it, one of the strongest features of C++ compared to C and to some other similar languages is the ability to express constraints so that they can be checked at compile time and accidental misuse can be prevented. So when designing a structure, ask yourself what operations it should permit. All the other uses should be forbidden, and it's best if such restrictions can be implemented statically (at compile time) so that misuse results in a compilation failure.

So when one needs an array, the answers to the following questions specify its behavior: 1. Is its size a) dynamic at runtime, or b) static, but only known at runtime, or c) static and known at compile time? 2. Can the array be allocated on the stack or not?

And based on the answers, this is what I see as the best data structure for such an array:

       Dynamic     |   Runtime static   |         Static
Stack std::vector      unique_ptr<T[]>          std::array
Heap  std::vector      unique_ptr<T[]>     unique_ptr<std::array>

Yep, I think unique_ptr<std::array> should also be considered, and none of these is a tool of last resort. Just think what fits best with your algorithm.

All of these are compatible with plain C APIs via the raw pointer to data array (vector.data() / array.data() / uniquePtr.get()).

P. S. Apart from the above considerations, there's also one of ownership: std::array and std::vector have value semantics (have native support for copying and passing by value), while unique_ptr<T[]> can only be moved (enforces single ownership). Either can be useful in different scenarios. By contrast, plain static arrays (int[N]) and plain dynamic arrays (new int[10]) offer neither and thus should be avoided if possible - which should be possible in the vast majority of cases. If that weren't enough, plain dynamic arrays also offer no way to query their size - an extra opportunity for memory corruption and security holes.
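
As a sketch of the "compatible with plain C APIs" point, with a made-up c_api_fill function:

#include <array>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical C function that fills a buffer of doubles.
extern "C" void c_api_fill(double* out, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) out[i] = 0.0;
}

int main()
{
    std::size_t n = 100;

    std::vector<double> v(n);
    std::array<double, 16> a{};
    std::unique_ptr<double[]> p(new double[n]);

    c_api_fill(v.data(), v.size());
    c_api_fill(a.data(), a.size());
    c_api_fill(p.get(), n);   // unique_ptr<T[]> carries no size; you track it yourself
}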

Violet Giraffe
6

One additional reason to allow and use std::unique_ptr<T[]>, that hasn't been mentioned in the responses so far: it allows you to forward-declare the array element type.

This is useful when you want to minimize the chained #include statements in headers (to optimize build performance.)

For instance -

myclass.h:

class ALargeAndComplicatedClassWithLotsOfDependencies;

class MyClass {
   ...
private:
   std::unique_ptr<ALargeAndComplicatedClassWithLotsOfDependencies[]> m_InternalArray;
};

myclass.cpp:

#include "myclass.h"
#include "ALargeAndComplicatedClassWithLotsOfDependencies.h"

// MyClass implementation goes here

With the above code structure, anyone can #include "myclass.h" and use MyClass, without having to include the internal implementation dependencies required by MyClass::m_InternalArray.

If m_InternalArray were instead declared as a std::array<ALargeAndComplicatedClassWithLotsOfDependencies, N> or a std::vector<...>, the result would be attempted usage of an incomplete type, which is a compile-time error.

  • For this particular use case, I'd opt for the Pimpl pattern to break dependence - if it's used only privately, then the definition can be deferred until the class methods are implemented; if it's used publicly, then the users of the class should have already had the concrete knowledge about `class ALargeAndComplicatedClassWithLotsOfDependencies`. So logically you shouldn't run into such scenarios. –  Apr 11 '18 at 14:32
  • For me it is more elegant to hold one/a few/an array of internal objects via unique_ptr (and thus exposing names of the internal types) instead of introducing one more abstraction level with typical PIMPL. So this answer is valuable. Another note: one must wrap his internal type if it is not default-destructible when it is desired to use it with unique_ptr. – Kokos Feb 25 '21 at 19:16
3
  • You need your structure to contain just a pointer for binary-compatibility reasons.
  • You need to interface with an API that returns memory allocated with new[]
  • Your firm or project has a general rule against using std::vector, for example, to prevent careless programmers from accidentally introducing copies
  • You want to prevent careless programmers from accidentally introducing copies in this instance.

There is a general rule that C++ containers are to be preferred over rolling-your-own with pointers. It is a general rule; it has exceptions. There's more; these are just examples.

Engineero
Jimmy Hartzell
3

To answer people who think you "have to" use vector instead of unique_ptr: in CUDA programming on the GPU, when you allocate memory on the device you must go through a raw pointer (with cudaMalloc). Then, when retrieving that data on the host, you again work with a raw pointer, and unique_ptr is fine for handling that pointer easily. The extra cost of converting the double* to a vector<double> is unnecessary and leads to a loss of performance.
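
A sketch of that host-side pattern (error handling omitted; cudaMalloc, cudaMemcpy and cudaFree are the CUDA runtime calls referred to above):

#include <cuda_runtime.h>
#include <cstddef>
#include <memory>

// Host-side sketch: device memory is necessarily handled through a raw pointer,
// and unique_ptr<double[]> owns the host-side copy with no extra container cost.
std::unique_ptr<double[]> fetch_results(std::size_t n)
{
    double* d_data = nullptr;
    cudaMalloc((void**)&d_data, n * sizeof(double));   // device allocation: raw pointer only

    // ... launch kernels that fill d_data ...

    std::unique_ptr<double[]> h_data(new double[n]);   // host-side owner
    cudaMemcpy(h_data.get(), d_data, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
    return h_data;
}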

2

It may be the best answer possible when you only get to poke a single pointer through an existing API (think window messages or threading-related callback parameters) that has some measure of lifetime after being "caught" on the other side of the hatch, but which is unrelated to the calling code:

unique_ptr<byte[]> data = get_some_data();

threadpool->post_work([](void* param) { do_a_thing(unique_ptr<byte[]>((byte*)param)); },
                      data.release());

We all want things to be nice for us. C++ is for the other times.

Simon Buchan
2

unique_ptr<char[]> can be used where you want the performance of C and the convenience of C++. Consider that you need to operate on millions (OK, billions if you're still not convinced) of strings. Storing each of them in a separate string or vector<char> object would be a disaster for the memory (heap) management routines. Especially if you need to allocate and delete different strings many times.

However, you can allocate a single buffer for storing that many strings. You wouldn't want char* buffer = (char*)malloc(total_size); for obvious reasons (if they're not obvious, search for "why use smart pointers"). You would rather use unique_ptr<char[]> buffer(new char[total_size]);

By analogy, the same performance and convenience considerations apply to non-char data (consider millions of vectors/matrices/objects).
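
A minimal sketch of that single-buffer idea (the StringArena class is invented for illustration; no growth handling beyond a simple capacity check):

#include <cstddef>
#include <cstring>
#include <memory>
#include <vector>

// All strings live back to back in one preallocated buffer; 'offsets' records
// where each string starts. One heap block instead of millions of small ones.
class StringArena {
public:
    explicit StringArena(std::size_t total_size)
        : buffer_(new char[total_size]), capacity_(total_size), used_(0) {}

    std::size_t add(const char* s) {                   // returns the string's index
        std::size_t len = std::strlen(s) + 1;          // include the '\0'
        if (used_ + len > capacity_) return static_cast<std::size_t>(-1);  // arena full
        std::memcpy(buffer_.get() + used_, s, len);
        offsets_.push_back(used_);
        used_ += len;
        return offsets_.size() - 1;
    }

    const char* get(std::size_t i) const { return buffer_.get() + offsets_[i]; }

private:
    std::unique_ptr<char[]> buffer_;
    std::size_t capacity_;
    std::size_t used_;
    std::vector<std::size_t> offsets_;
};

int main()
{
    StringArena arena(1 << 20);
    std::size_t i = arena.add("hello");
    return arena.get(i)[0] == 'h' ? 0 : 1;
}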

Serge Rogatch
  • Why not put them all in one big `vector`? The answer, I suppose, is because they will be zero-initialised when you create the buffer, whereas they won't be if you use `unique_ptr`. But this key nugget is missing from your answer. – Arthur Tacca Mar 16 '19 at 12:58
0

If you need a dynamic array of objects that are not copy-constructible, then a smart pointer to an array is the way to go. For example, what if you need an array of atomics?
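
For instance, a minimal sketch with std::atomic, which is neither copyable nor movable:

#include <atomic>
#include <cstddef>
#include <memory>

int main()
{
    std::size_t n = 128;                                  // size known only at run time

    // make_unique<T[]> value-initializes, so every counter starts at zero.
    auto counters = std::make_unique<std::atomic<int>[]>(n);

    counters[0].fetch_add(1);

    // A std::vector<std::atomic<int>> can also be constructed with n elements,
    // but it can never be resized or copied afterwards, because the elements
    // cannot be copied or moved.
    return counters[0].load() == 1 ? 0 : 1;
}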