Ponder the use of unique_ptr to enforce the Rule of Zero

Posted: April 12, 2014 in Programming Recipes

I read the article “Enforcing the Rule of Zero” in the latest issue of ACCU’s Overload, and I’d like to point out something that I misapprehended on a first reading.

I’m not going to explain what the Rule of Zero is. If you’ve never heard of it, I heartily suggest you read Martinho Fernandes’ article. The rule states (basically): classes that have custom destructors, copy/move constructors or copy/move assignment operators should deal exclusively with ownership. This rule is an application of the Single Responsibility Principle.
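
To make the rule concrete, here is a minimal sketch (the class name is mine): every member already manages its own resource, so the compiler-generated special members do the right thing and the class declares none of them.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical example: all resources are owned by members,
// so no destructor, copy/move constructor or assignment operator is declared.
struct widget
{
   std::string name;                  // owns its character buffer
   std::vector<int> data;             // owns its elements
   std::shared_ptr<int> shared_state; // shared ownership, released automatically
};
```

Copying, moving and destroying a widget just works, because each member knows how to copy, move and destroy itself.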

Back to the article, a clever point the author underlines is about polymorphic deletion: what to do when we want to support polymorphic deletion, or when our classes have virtual functions? Quoting Herb Sutter: “If A is intended to be used as a base class, and if callers should be able to destroy polymorphically, then make A::~A public and virtual. Otherwise make it protected (and nonvirtual).”

For example:


struct A
{
   virtual void foo();
};

struct B : A
{
   void foo() override;
   int i;
};

A* a = new B{}; 

delete a; // oops, undefined behavior

To correctly destroy B, A should have a virtual destructor:


struct A
{
   virtual ~A() {}
   virtual void foo();
};

Problem: now the compiler won’t automatically generate move operations for A. And, worse, with a user-declared destructor the implicit generation of copy operations is deprecated (they are still generated, though; see Scott Meyers’ comment below). So, the following may solve all the problems:


struct A
{
   //A() = default; // if needed
   virtual ~A() = default;
   A(const A&) = default;
   A& operator=(const A&) = default;
   A(A&&) = default;
   A& operator=(A&&) = default;

   virtual void foo();
};

This is called the Rule of Five. The author then wisely proposes to follow the second part of the Rule of Zero, that is: “Use smart pointers & standard library classes for managing resources”. I add: or use a custom wrapper, if no STL class fits your needs. The key point is: use abstraction, use RAII, use C++.

Then the author suggests using a shared_ptr:

struct A
{
   virtual void foo() = 0;
};

struct B : A
{
   void foo() override {}
};

shared_ptr<A> ptr = make_shared<B>(); // fine

This works well and it avoids the (more verbose) Rule of Five.

Why shared_ptr and not simply unique_ptr? Let me stress the key point: the article never says “use any smart pointer”, because not every smart pointer would do. In fact, a plain unique_ptr would not have worked.

One of the (many) differences between unique and shared pointers is the deleter. Both can have a custom deleter, but in unique_ptr the deleter is part of the type signature (namely, unique_ptr<T, Deleter>), while for shared_ptr it is not: shared_ptr has a type-erased deleter (and, in fact, also a type-erased allocator).

This implies that shared_ptr<B> has a deleter which internally remembers that the real type is B.

Consider the example I borrowed from the article: when make_shared<B> is invoked, a shared_ptr<B> is constructed as expected. shared_ptr<B> constructs a deleter which will delete the B*. Later, shared_ptr<B> is passed to shared_ptr<A>’s constructor: since A* and B* are compatible pointers, shared_ptr<B>’s deleter is “shared” as well. So even if the type of the object is shared_ptr<A>, its deleter still remembers to delete a pointer of type B*.
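
A small self-contained sketch of this behavior (the destruction flag is mine, added just to make the effect observable): even though no destructor here is virtual, the B object is destroyed correctly.

```cpp
#include <cassert>
#include <memory>

struct A { virtual void foo() {} }; // note: no virtual destructor

bool b_destroyed = false;
struct B : A
{
   void foo() override {}
   ~B() { b_destroyed = true; } // non-virtual, yet it will run
};

void demo()
{
   {
      // the deleter created here remembers that the object is a B
      std::shared_ptr<A> p = std::make_shared<B>();
   } // the type-erased deleter runs `delete` on a B*, so ~B() executes
   assert(b_destroyed);
}
```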

Conversely, unique_ptr<T> has a default deleter of type std::default_delete<T>. This is because unique_ptr is intended to be used exactly as a light wrapper around a delete operation (with unique ownership, not merely a scoped one). std::default_delete<A> can be constructed from std::default_delete<B> (since the pointers are compatible), but it will delete an A*. This is by design, since unique_ptr is intended to mimic operator new and delete closely, and the (buggy) code { A* p = new B; delete p; } executes delete on an A*.
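
A compile-time sketch of the trap (the commented line is the dangerous one): the conversion between the two unique_ptr types compiles without complaint, which is exactly why it is easy to get wrong.

```cpp
#include <cassert>
#include <memory>
#include <type_traits>

struct A { virtual void foo() {} };
struct B : A { void foo() override {} int i; };

// unique_ptr<B> happily converts to unique_ptr<A>, because
// default_delete<A> is constructible from default_delete<B>:
static_assert(std::is_convertible<std::unique_ptr<B>, std::unique_ptr<A>>::value,
              "the conversion itself is well-formed");

// std::unique_ptr<A> p{new B{}}; // compiles fine, but when p is destroyed
//                                // it executes `delete` on an A*: undefined behavior
```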

A possible way to work around this issue is to define a custom type-erased deleter for unique_ptr. We have several ways to implement this: one uses std::function, the other uses a technique called trampolines, described by Davide Di Gennaro in his book Advanced C++ Metaprogramming. The idea for the latter was suggested by Davide in a private conversation.

Using std::function is the simplest way:


template<typename T>
struct concrete_deleter
{
   void operator()(void* ptr) const
   {
      delete static_cast<T*>(ptr);
   }
};

...

unique_ptr<A, function<void(void*)>> ptr { new B{}, concrete_deleter<B>{} };

Here we are using type erasure, again. std::function<void(void*)> is a wrapper around any callable object supporting operator()(void*). When the unique_ptr has to be destroyed, it calls the deleter, which is actually a concrete_deleter<B>. Internally, concrete_deleter<T> casts the void* back to T*. To reduce verbosity and avoid errors like { new B(), concrete_deleter<A>{} }, you can write a make_unique-like factory.
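
A sketch of such a factory (the name make_unique_erased is mine; the destruction flag exists only so the example can be checked): it pairs `new T` with the matching concrete_deleter<T>, so a mismatch cannot be written by accident.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <utility>

struct A { virtual void foo() {} };

bool b_deleted = false;
struct B : A
{
   void foo() override {}
   ~B() { b_deleted = true; } // flag only to make the deletion observable
};

// deleter as in the post
template <typename T>
struct concrete_deleter
{
   void operator()(void* ptr) const { delete static_cast<T*>(ptr); }
};

// Hypothetical factory (the name is mine): pairs `new T` with the matching
// concrete_deleter<T>, so { new B{}, concrete_deleter<A>{} } cannot happen.
template <typename T, typename... Args>
std::unique_ptr<T, std::function<void(void*)>> make_unique_erased(Args&&... args)
{
   return std::unique_ptr<T, std::function<void(void*)>>{
      new T{std::forward<Args>(args)...}, concrete_deleter<T>{}};
}
```

Then `std::unique_ptr<A, std::function<void(void*)>> p = make_unique_erased<B>();` works as expected, at least with single inheritance, where the A* and B* addresses coincide in practice (see the multiple-inheritance caveat in the comments).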

The second solution is cunning: it implements type erasure without a virtual function call (though, since it still dispatches through a function pointer at runtime, a commenter below rightly calls it a form of dynamic dispatch anyway):

struct concrete_deleter
{
   using del_t = void(*)(void*);
   del_t del_;

   template <typename T>
   static void delete_it(void *p)
   {
      delete static_cast<T*>(p);
   }

   template<typename T>
   concrete_deleter(T*)
     : del_(&delete_it<T>)
   {}

   void operator()(void* ptr) const
   {
     (*del_)(ptr);
   }
};
...

template<typename T, typename... Args>
auto make_unique_poly(Args&&... args)
{
   return unique_ptr<T, concrete_deleter>{new T{forward<Args>(args)...}, static_cast<T*>(nullptr)};
}

...
unique_ptr<A, concrete_deleter> ptr = make_unique_poly<B>();

The idea is storing the type information directly in the del_ function pointer.

[Edit]

As many readers suggested, this can also be done by using a lambda. This way we get rid of the concrete_deleter support struct. I’m just wary of this solution (which was in the first draft of this post), because it uses a very generic type like the following:

unique_ptr<A, void(*)(void*)>

When you read the code you don’t know, at first sight, what that unique_ptr means. Worse, you may re-assign the unique_ptr to another one, passing a totally different lambda that happens to have the same signature.
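
A short sketch of the risk (the broken deleter is deliberately wrong, and the counter exists only to observe it): the function-pointer deleter type accepts any stateless lambda with the right signature, including one that does the wrong thing.

```cpp
#include <cassert>
#include <memory>

struct A { virtual void foo() {} };

int deletions = 0;
struct B : A
{
   void foo() override {}
   ~B() { ++deletions; }
};

using any_ptr = std::unique_ptr<A, void (*)(void*)>;

void demo()
{
   B* raw = new B;
   {
      any_ptr bad{raw, [](void*) { /* compiles, but forgets to delete */ }};
   } // bad ran its deleter, yet the B survived its "owner"
   assert(deletions == 0); // exactly the danger: nothing ties the deleter to B
   delete raw;             // manual cleanup, only to keep this demo leak-free
}
```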

Moreover, as Nicolas Silvagni commented, the size of unique_ptr<A, concrete_deleter> (or of the lambda-based version) is greater than that of unique_ptr<A> (typically it doubles, e.g. 8 bytes vs 16 bytes on a 64-bit architecture). To prevent this, an intrusive approach is possible (read the comments for details). Alas, an intrusive approach does not follow the design of unique_ptr (and of STL wrappers in general), which is non-intrusive.
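
The size difference can be checked directly; a sketch under the assumption (true on mainstream implementations) that the empty default_delete occupies no space in the unique_ptr:

```cpp
#include <cassert>
#include <memory>

struct A { virtual void foo() {} };

// On common implementations the stateless std::default_delete is optimized
// away, while a function-pointer deleter adds a word:
static_assert(sizeof(std::unique_ptr<A>) == sizeof(A*),
              "stateless deleter: just the raw pointer");
static_assert(sizeof(std::unique_ptr<A, void (*)(void*)>) == 2 * sizeof(void*),
              "function-pointer deleter: pointer + deleter");
```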

[/Edit]

So to sum up, here are the possible workarounds:

  1. Use shared_ptr (if possible),
  2. Apply the Rule of Five (so declare a virtual destructor),
  3. Use unique_ptr with a custom deleter.

 

What do you think?

Acknowledgments

Many thanks to Davide Di Gennaro for reviewing this article and suggesting some improvements. Some ideas arose from a private conversation we had.

Comments
  1. Andy Prowl says:

    Very nice article! Just a minor remark: in your first snippet, the comment “// ops, leak” sounds a bit optimistic: we actually have undefined behavior there (see 5.3.5/3) 🙂

  2. Thibaud Fortuna says:

    Maybe there is room in the C++ standard for a new smart pointer which can express unique ownership with a type-erased deleter (and allocator)?

    Yet I do not feel comfortable using the Rule of Zero with polymorphic deletion, because the user of the class has to be very careful to wrap every instance in a shared_ptr or a unique_ptr with a custom deleter.
    If I want to use the Rule of Zero with this kind of class (and I want to), I think I would make the constructor private and add two static factories, B::make_shared and B::make_unique, to force the user to wrap it in the appropriate smart pointer.
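
A sketch of what those factories could look like (names and details are assumed, not from the article): since B’s constructor is private, the only way to create a B is through a smart pointer that knows how to destroy it.

```cpp
#include <cassert>
#include <memory>

struct A { virtual void foo() = 0; };

class B : public A
{
   B() = default; // private: instances can only come from the factories below
public:
   void foo() override {}

   static std::shared_ptr<B> make_shared()
   {
      // cannot use std::make_shared (the constructor is private),
      // but shared_ptr's type-erased deleter still remembers B
      return std::shared_ptr<B>(new B);
   }
   static std::unique_ptr<B, void (*)(void*)> make_unique()
   {
      return {new B, [](void* p) { delete static_cast<B*>(p); }};
   }
};
```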

    • Marco Arena says:

      Completely agree. If I create a base class A, I cannot be sure it will be used via shared_ptr (or via unique_ptr with polymorphic deletion support), so I would go with a virtual destructor. Another approach is to hide A – for example through factories (as you suggested). In this case you can use the unique_ptr with polymorphic deletion.

      Anyhow, the point of this post is: not every smart pointer “supports” polymorphic deletion for free. The Rule of Zero and polymorphic deletion are strictly related, so be careful!

      Thanks for your comment.

  3. Enis Bayramoglu says:

    Hi Marco, instead of the concrete_deleter class, why don’t we simply use a stateless lambda?

    return unique_ptr{new T{forward(args)…},[](void * p){delete static_cast(p);};

    • Enis Bayramoglu says:

      I see that my comment has been filtered and the template arguments removed; so I’ll rewrite it with the template arguments in regular parentheses:

      return unique_ptr(T,void(*)(void)){new T{forward(args)…},[](void * p){delete static_cast(T)(p);};

  4. Good article. One minor quibble I have is with the use of the phrase “without dynamic dispatching”. I guess it depends on your definition, but I would say that a function pointer is certainly evaluated at run-time. This may be more performant than a virtual function dispatch, but I would say it is still a form of dynamic dispatch. Certainly it seems unlikely that the compiler would be able to resolve the function at compile-time.

  5. Marco Arena says:

    Hi guys, thanks for your comments. The lambda approach is another (more compact) solution. Honestly my first draft of this article contained a lambda, very similar to the one you proposed.

    I’m afraid of the type:

    unique_ptr<Base, void(*)(void*)>
    

    because it is very generic. This means you can instantiate it by passing any lambda. OK, we have the make_unique_poly utility, but I consider using a concrete class here more readable (users know exactly what will happen) and (maybe) safer, because I cannot re-assign the unique_ptr by passing any void(*)(void*) lambda. I totally forgot to discuss this in the article, so your objections are legitimate 🙂

  6. Nicolas Silvagni says:

    Are you really trying to optimize a virtual function call at the destruction of an object ?

    Destroying an object at least involves freeing memory back to the allocator, and there is no chance that the virtual call will dominate the cost here.

    Not declaring the base class destructor virtual, to avoid bringing everything back with bloated “= default” lines, is understandable, but it costs you the storage of a non-stateless deleter in the unique_ptr. It costs memory, and it still needs a function call to a dynamic address that will trash the CPU pipelining.

    Instead of impacting the runtime with a cost worse than that of a virtual destructor, you can just allocate a different entry in the virtual table and use that call to delete the object. Everything will be virtually as costly or as cheap (pick the one you prefer) as a virtual destructor:

    #include <iostream>
    #include <memory>

    struct A {
    virtual void foo() {};
    virtual void DeleteMe() = 0;

    struct UniquePtrDeleter {
    void operator()( A* a ) const { a->DeleteMe(); }
    };
    };
    using PtrOfA = std::unique_ptr<A, A::UniquePtrDeleter>;

    static_assert(sizeof(PtrOfA)==sizeof(A*), "keep unique ptr as a single pointer");

    struct B : public A {
    void DeleteMe() override { delete this; }
    ~B() { std::cout << "~B "; }
    };

    struct C : public A {
    void DeleteMe() override { delete this; }
    ~C() { std::cout << "~C "; }
    };

    int main() {
    PtrOfA a1 ( new B );
    PtrOfA a2 ( new C );
    }

    • Marco Arena says:

      Thanks for your comment. The aim of the latest approach is not optimization; I just mentioned it because a side effect is not calling a virtual function. You’re right: with this concrete_deleter I store a non-stateless deleter in the unique_ptr, so I change the size of the unique_ptr, and this can be a problem. If this is really an issue for your domain, use a stateless lambda instead, the same code suggested by some readers and discussed in the [edit] section of the post. I think this is better than what you suggest because you don’t need to change your class (that would actually be opposite to unique_ptr’s design, which is non-intrusive). Many thanks Nicolas, I’m going to add some notes about this in the post asap.

      • Nicolas Silvagni says:

        The lambda version described in the EDIT will store the function pointer too. The construct is valid because a stateless lambda can decay to a function pointer. The only difference is, as you said, a less verbose type for the smart pointer.

        The only real stateless version, apart from the virtual destructor is the intrusive virtual function, as far as i can tell.

      • Marco Arena says:

        You’re right, my mistake, the lambda is obviously stored as a function pointer in this case.

      • Nicolas Silvagni says:

        Mistakes happens 🙂

        Something that is useful to mention: polymorphism is strongly related to a notion of entity, and you most likely want to delete the copy and move operations to prevent unwanted slicing anyway. So the deletion of the special member functions is the wanted default behavior and the Rule of Zero is intact.

      • Marco Arena says:

        This is interesting anyway. The Rule of Zero applied to polymorphic deletion is not really “self-contained”: when you design the base class A, if you don’t declare a virtual destructor (even though you know a pointer to A will be used to delete concrete classes) you are relying on the shared_ptr (or other wrapper). As a reader suggested, to force the usage of some wrapper you need to encapsulate the construction of the class (creating a factory or something similar). OK, this works and can be considered part of the design of the class (in a sense).

        I’m not very sure I’m happy with this. Maybe I’m happier with a virtual destructor 🙂 For this reason this article is not about “when the Rule of Zero & polymorphic deletion make sense”; instead, it’s just about a detail the Overload article missed: “why shared_ptr works that way and why unique_ptr does not”. I think this detail is important and can be useful to know in many situations.

  7. Casting from void* won’t work for multiple inheritance.

    class B : public A1, public A2 {};
    unique_ptr<A2, function<void(void*)>> ptr { new B{}, concrete_deleter<B>{} };

    • Marco Arena says:

      Right, a possible solution is to create a concrete_deleter depending on two template parameters, one for the base, the other for the concrete type. I don’t like it but it works and the size of unique_ptr is preserved:

      template<typename Base, typename Concrete>
      struct concrete_deleter
      {
      	void operator()(Base* b)
      	{
      		delete static_cast<Concrete*>(b);
      	}
      };
      
      unique_ptr<A2, concrete_deleter<A2, B>> sp = unique_ptr<B, concrete_deleter<A2, B>>(new B());
      

      This does not allow you to use unique_ptr in its natural way. For example, the following can’t be emulated:

      class ISomething {};
      class Concrete1 : public ISomething{};
      class Concrete2 : public ISomething{};
      ...
      
      unique_ptr<ISomething> create() 
      {
         ...
      }
      
      

      I forgot to mention: another way is to change the void* solution a bit:

      template<typename Base>
      struct concrete_deleter
      {
      	using del_t = void(*)(Base*);
      	del_t del_;
      
      	template <typename T>
      	static void delete_it(Base *p)
      	{
      		delete static_cast<T*>(p);
      	}
      
      	template<typename T>
      	concrete_deleter(T*)
      		: del_(&delete_it<T>)
      	{}
      
      	void operator()(Base* ptr) const
      	{
      		(*del_)(ptr);
      	}
      };
      
      unique_ptr<A2, concrete_deleter<A2>> ptr = ...
      

      This time we can write factories like the previous one.

  8. Juan Alday says:

    Hi Marco,

    Thanks for a great post. Indeed my article explicitly covers the use of shared_ptr as a means of enforcing the Rule of Zero.
    Not covering the use of std::unique_ptr was a decision I made after realizing it would probably require a whole article of its own (obvious now, seeing how much interest your post has generated), so I took it out of my original draft and decided to elaborate on it in a follow-up article.

    Having said that, I follow Peter Sommerlad’s advice of using shared_ptr only. While heavier than a unique_ptr (which is supposed to simply provide ownership semantics over a raw pointer) it fits perfectly with the rule of zero for dealing with non-virtual destructors, and it is much closer to other OO languages’ references with garbage collection.

    Best regards, and once again, congrats on a great post

    • Marco Arena says:

      Hi Juan,
      thanks for reading and commenting. I found your article very interesting and I really appreciated how it sums up all the points around the Rule of Zero.

      I agree on using the shared_ptr only. But this is not always possible.

      For this reason, a juicy discussion could be about the tradeoff of using the Rule of Zero at any cost (e.g. by using a unique_ptr with a shared_ptr-like deleter, or an intrusive approach, …) or choosing the Rule of Five.

      Kind regards, and thanks again.

  9. […] [Arena] Ponder the use of unique_ptr to enforce the Rule of Zero, https://marcoarena.wordpress.com/2014/04/12… […]

  10. Gabor Fekete says:

    Hi Marco,

    Nice post!
    I also read the ACCU article (“Enforcing the Rule of Zero”) and Listing 4 was kind of shocking. So, I set out to experiment a bit and came up with this:

    http://feherenfekete.wordpress.com/2014/04/24/shared_ptr-polymorphic-magic-pitfall/

    So, I consider relying on shared_ptr’s “polymorphic magic” dangerous. I wonder if this “pitfall” will ever change/go away.

  11. Marco Arena says:

    Hi Gabor, thanks for reading. I think the Rule of Zero “at any cost” can be dangerous and your article nicely describes another evil scenario. Thanks for the link!

  12. scottmeyers says:

    You seem to imply that in C++14, declaring a destructor prevents compilers from generating copy operations. That is not the case, per 12.8/7 and 12.8/18. As far as I know, there is no difference between C++11 and C++14 as regards the conditions under which copy and move operations are generated. (I’ve written to Juan Alday about this, too, as he also seems to think that C++11 and C++14 differ in this area, which I don’t believe is the case.)

    • Marco Arena says:

      Scott, you’re right. Declaring a destructor just inhibits the generation of move operations; the generation of copy operations is deprecated (and could be fully removed in a future standard, but this is not the case in C++14). Thanks for pointing this out.
