The last few weeks were positively demanding for me…

At the end of October we organized the C++ Day, an event entirely dedicated to C++, made by the Italian C++ Community that I lead and coordinate. If you feel like reading some lines about the event, have a look here. It was a great day!

C++ Day

C++ Day 2016 ~ Florence

Some days after, I left for Seattle, to attend the Microsoft MVP Summit at the Microsoft Campus in Redmond. An awesome experience!


Italian Microsoft MVPs at the MVP Global Summit 2016

By coincidence, the ISO C++ Standard committee was meeting exactly the same week I was in Redmond. I couldn’t miss it! So, at the end of the Summit, a few other MVPs (like Marius Bancila and Raffaele Rialdi) and I went to Issaquah to attend the meeting for half a day. The game was afoot.

The experience was really amazing. First of all, we added our names to the attendance sheet, getting ourselves immortalized in the minutes :) That was suggested by Herb, who was very kind to all of us.


From left: Marco Arena, Herb Sutter, Raffaele Rialdi, Marius Bancila

I am also glad we met many members of the committee. They were really kind and welcoming.

I think attending one of these meetings is a must for whoever cares about the C++ language and wants to understand how things are discussed and evolve.

You probably know that the committee is divided into a few working and study groups – WG and SG. The working groups are Evolution, Library Evolution, Library, and Core. We were sitting in the Evolution Working Group (EWG), where we heard discussions about a few proposals for C++17 and C++20.

A proposal presentation starts with the author(s) defending the idea, going through the paper and showing examples. In the sessions I attended, that part was quick (about ten minutes) and other people interrupted only occasionally for small clarifications.

Then the discussion starts. It is coordinated by a moderator who goes around the room and keeps things on track. Each member who wants to say something just raises her hand and politely waits to take the floor. Too often I attend meetings where people simply interrupt one another; this was exactly the opposite!

Speaking with some members of the committee, I discovered that some (crucial) discussions are instead plenary (they involve the whole committee and not only a certain working group) and take place in another – bigger – room.

The discussions I witnessed ended with a poll, something like: “How many people agree? How many disagree? How many strongly disagree?”. It also happened that a discussion was simply postponed because the co-author – Bjarne Stroustrup – was not there.

Each proposal is deeply inspected, bringing out lots of details, counterexamples and observations. That part was the most instructive for me.

On that point, I realized that one thing is particularly vital for the committee: heterogeneity. People have different backgrounds and interests, and they use C++ in different ways. The details that come out of the discussions reflect this heterogeneity. Without it, we would lose many details and observations.

For example, at some point Peter Sommerlad took the floor and asked something like: “so, if we accept this proposal, we should start teaching people to stop doing X”. Peter made that observation because he is a professor, and his point of view is often shaped by his main job.

Other examples were concerns about legacy code, which a certain proposal could break under circumstances that some attendees dealt with daily. Also interesting were the observations made by compiler implementers, who often already see how complicated it would be to implement a certain new C++ feature.

The experience was definitely worth it. Thanks to Herb and Andrew Pardoe for their hospitality.

Sometimes such meetings take place in Europe, so if you cannot go to the USA, just wait for one happening on this side of the world and attend! I’ll do it again.

The following week I went to Berlin for Meeting C++ 2016. I am happy I was part of the staff. There I had the opportunity to meet Bjarne Stroustrup and to dine with him and other special people.


From Left: Michele Adduci, Valentino Picotti, Marco Arena, Bjarne Stroustrup, Jens Weller, Gian Lorenzo Meocci, Marco Foco

This amazing experience concludes my “C++ & friends weeks”.

My short-term plans consist mostly of blogging – I want to write a new “C++ in Competitive Programming” installment, as well as a couple of posts I have had in mind for months – and of planning the next events and activities. I won’t have more spare time in 2017 than I had in 2016, but I hope to be more active.

In June I want to organize a new C++ event in Italy. If you feel like supporting/sponsoring/helping, please get in touch with me.

Crafting software is about balancing competing trade-offs. It’s impossible to optimize every factor of a system – speed, usability, accuracy, etc. – at the same time. Moreover, today’s solutions impact tomorrow’s decisions and solutions. In Competitive Programming, on the other hand, the best solution is simply the one that makes each test case pass.

However, since each challenge has specific requirements and constraints, we can still imagine it as a small and simplified component of a bigger system. Indeed, a challenge specifies:

  • the input format and the constraints of the problem;
  • the expected output;
  • the environment constraints (e.g. the solution should take less than 1s);
  • [optional] amount of time available to solve the challenge (e.g. 1 hour).

Questions like “can I use a quadratic solution?” or “may I use extra space?” can be answered only by taking into account all the problem information. Balancing competing trade-offs is key in Competitive Programming too.

This post is a bit philosophical: it’s about what I consider the essence of Competitive Programming and why a professional could benefit from it. Since February of this year I’ve been organizing monthly “coding dojos” on these themes here in Italy. After some months, I’m seeing good results, with people regularly joining my dojos and practicing at home. It’s likely that in the future I’ll dedicate a post to how the dojos are organized, but if you want more information now, please get in touch.

Follow me through this discussion and I’ll explain some apparently hidden values of Competitive Programming and why I think they are precious for everyone.

Let me start by stating that the only real compromise in Competitive Programming is one that produces an acceptable solution. However, we can distinguish two “coding modes”: competition and practice. Competition is about being both fast and accurate at the same time: you just have to correctly solve all the challenges as fast as possible. Practice mode, on the other hand, is the opportunity to take your time and think more deeply about compromises, different approaches and paradigms, scalability, etc. In practice mode we can also explore the usage of standard algorithms. A story about that concept: at one of my dojos I proposed this warm-up exercise:

Given two arrays of numbers A and B, for i from 0 up to N-1, count how many times A[i] > B[i].

For example:

A = [10, 2, 4, 6, 1]
B = [3, 2, 3, 10, 0]

The answer is 3 (10>3, 4>3, 1>0).

This exercise was trivial and people just solved it by looping and counting.

During the retrospective I asked people: “can you solve this problem without looping explicitly?”. Then I lowered my aim: “suppose I provide you with a function which processes the arrays pair-wise”. One guy plucked up the courage and replied: “like zip?”. I said: “Yes!”. I dashed to the blackboard and sketched the idea.

The C++ algorithm to use in this case was std::inner_product, which we already met in the very first post of this series. I’ll get back to inner_product again in the future; meanwhile, here is the slick solution to this problem.
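A minimal sketch of it (assuming A and B are two vector&lt;int&gt; of the same length; countGreater is a name of mine):

#include <functional>
#include <numeric>
#include <vector>

// zip A and B pairwise with greater<>, then reduce the resulting booleans with plus<>
int countGreater(const std::vector<int>& A, const std::vector<int>& B)
{
    return std::inner_product(A.begin(), A.end(), B.begin(), 0,
                              std::plus<>{}, std::greater<>{});
}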

As we already saw, inner_product is a kind of zip_and_reduce function: each pair is processed by the second function (greater) and the results are sequentially accumulated by the first one (plus). People realized the value of applying a known pattern to the problem. We also discussed how easy it is to change the processing functions, obtaining different behaviors while keeping the same core algorithm.

In this post I’ll show different scenarios by turning on “practice mode” and “competition mode” separately, discussing the different ways to approach problems and care about compromises in both.

Consider the problem we tackled in the previous post: the “most common word”. Since we used std::map, the solution was generic enough to work with types other than strings with a little effort. For example, given an array of integers, we can find the mode (as statisticians call it) of the sequence.

Imagine this solution still fits the requirements of the challenge.

In “competition mode”, we stay with this solution, just because it works. The pragmatic competitive coder was lucky because she already had a solution on hand, so she just exults and moves on to the next challenge of the contest.

Now, let’s switch “practice mode” on. First of all, we notice two interesting constraints of the challenge: 1) the elements of the array belong to the range 0-1’000; 2) the size of the sequence is within 100’000. Often, constraints hide shortcuts and simplifications.

Musing about life in general – and not only about programming – I like to say that “constraints hide opportunities”.

In Competitive Programming this means that whenever we see particular sentences like “assume no duplicates exist” or “no negative values will be given”, it’s very likely that this information should be used somehow.

Sometimes “very likely” becomes a requirement and we have no other option: we have to craft a solution which exploits the constraints of the problem. Such challenges are interesting because they require us to find a way to cut off part of the space of possible solutions.

Anyway, back to the mode problem: since we know the domain is 0-1’000, we can just create a histogram (or frequency table) and literally count the occurrences of each number.
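A sketch of the approach (mode is a name of mine; the domain comes from the constraints):

#include <algorithm>
#include <vector>

// returns the mode, assuming every value lies within [0, 1000]
int mode(const std::vector<int>& sequence)
{
    std::vector<int> freq(1001, 0);   // one counter per possible value (~4 KB)
    for (int v : sequence)
        ++freq[v];
    // the answer is the index of the greatest counter
    return std::max_element(freq.begin(), freq.end()) - freq.begin();
}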

Although the solution is similar to the previous one (because std::map provides operator[]), this is dramatically faster because of contiguity.

However, we agreed to a compromise: we pay a fixed amount of space to allocate the histogram (~4 KB) every time, and we support only a fixed domain of ints (the ones within 0-1’000). That’s OK for this challenge, but it might not be acceptable in other situations. Balancing speed, space and genericity is something to think about in practice mode. At my coding dojos in Italy we usually discuss such things at the end of each challenge, during a 15-minute retrospective.

At this point we need to answer a related question: “are we always permitted to allocate that extra space?”. Well, it depends. Speaking about stack allocations – or allocations known “at compile time” – compilers have their own limits (which can be tuned), generally on the order of a few MB.

The story is different for dynamic allocations. On websites like HackerRank we may have a look at the environment page which contains the limits for each language. For C++, we are usually allowed to allocate up to 512 MB. Other websites can limit that amount of space, typically per challenge. In these situations, we have to be creative and find smarter solutions.

Take InterviewBit as an example – a website focused on programming interviews. A challenge states: “given a read-only array of n + 1 integers between 1 and n, find one number that repeats, in linear time, using constant space and traversing the stream sequentially”. It’s clear that we cannot use an auxiliary data structure, so forget maps or arrays. One solution consists in applying a “fast-slow pointer” strategy – that is, Floyd’s cycle-finding algorithm. I won’t show the solution here because very soon we’ll have a guest post by Davide Di Gennaro, who will show how to apply Floyd’s algorithm to another – more realistic – problem.

Space and time have a special connection. The rule of thumb is that we can set up faster solutions at the price of using more space, and vice versa. It’s another compromise. Sorting algorithms are an example: some people are amazed when they discover that, under certain conditions, we can sort arrays in linear time. For example, counting sort can perform very well when we have to sort very long sequences which contain values in a small range – like a long string of lowercase alphabetic letters.
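A minimal counting sort sketch for that scenario (lowercase letters only):

#include <algorithm>
#include <array>
#include <string>

// sorts a string of lowercase letters in O(N + 26) time, with O(26) extra space
void countingSort(std::string& s)
{
    std::array<int, 26> freq{};       // histogram of 'a'..'z'
    for (char c : s)
        ++freq[c - 'a'];
    auto out = s.begin();
    for (int i = 0; i < 26; ++i)      // rewrite the string in sorted order
        out = std::fill_n(out, freq[i], static_cast<char>('a' + i));
}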

Counting sort allocates an extra frequency table (a histogram, as we have just seen) containing the frequencies of the values in the range. It exhibits linear time and space complexity, that is O(N+K), where N is the length of the array to sort and K is the size of the domain. However, what if the elements go from 1 to N^2? Counting sort now exhibits quadratic complexity! If we need to care about this case, we can try another algorithm, like radix sort, which basically sorts data with integer keys by grouping the keys by the individual digits which share the same significant position and value.

Radix sort has other limitations too, but I won’t go into details because that’s not the point of this article. Again, I just want to highlight that other decisions have to be taken, depending on the requirements and the constraints of the problem. A general rule is that we can often trade space for speed. Space and time go hand in hand.

Time is constrained as well: typically 1 or 2 seconds on HackerRank. To give you a feeling of what that means, here are some timings measured in HackerRank’s environment.

std::accumulate – O(n) – on vector<int>:

100’000 elements: 65 microseconds

1’000’000 elements: 678 microseconds

10’000’000 elements: 6973 microseconds (6.973 milliseconds)

However, don’t be misled by asymptotic complexity alone: time is also affected by other factors. For example, let’s try accumulating on std::set:

100’000 elements: 747 microseconds

1’000’000 elements: 16063 microseconds (16.063 milliseconds)

10’000’000 elements: timeout (>2 seconds)

You see that contiguity and locality make a huge difference.

Imagine now what happens if we pass from N to N^2 operations. Roughly speaking, we can square the time needed to accumulate a vector of 100’000 elements to get an approximation of what would happen: we obtain ~4.5 milliseconds. If the challenge expects at most 1 second, we can afford it.

Should we submit such a quadratic solution?

Again, in competition mode we may. In practice mode, instead, we should think about it. What if the input grows? Moreover, some contests run extra tests once a day, generally stressing the solution by pushing inputs to the limit. In these cases, we need to understand our constraints and limits very carefully.

Compromises also originate from the format of the input. For example, if we are given a sorted sequence of elements, we can binary-search that sequence for free (that is, we don’t need to sort it ourselves). We can code even more optimized searches if the sequence also has some special properties. For instance, if the sequence represents a time series containing the samples of a 10-second acquisition at 100 Hz, we know precisely – and in constant time – where the sample of the 3rd second is. It’s like accessing an array by index.

The more we specialize (and optimize) a solution, the more we lose genericity. It’s just another compromise, another subject to discuss.

Compromises are also about types. In previous posts we faced problems involving integer values and assumed that int was fine. That’s not always the case. For example, problems often have 32-bit integer inputs and wider outputs – which overflow 32 bits. Sometimes the results do not even fit into 64-bit ints. Since we don’t have “big integers” in C++, we have to design such a facility ourselves or… switch to another language. It’s not a joke: in “competition mode”, we just use another language if possible. It’s quicker.

Instead, in “practice mode” we can design – or take from an external source – a big integer class, to be reused in “competition mode”. This aspect is particularly important: when I asked some top coders their thoughts on how to constantly improve and do better, their shared advice was to solve problems and refine a support library. Indeed, top-level competitive coders have their snippets on hand during contests.

This is a simplistic approximation of what we actually do as Software Engineers: we solve problems, first of all, by trying to directly use existing/tested/proven solutions (e.g. libraries, system components). If that’s not possible, we have two options: adapting an existing solution to embrace the new scenario, or writing a new one (“writing” may also mean “introducing a new library”, “using some new system”, etc.). Afterwards, we can take some time to understand whether it’s possible to merge this solution with the ones we already have, maybe with some work (we generally call it refactoring). You know, the real process is more complicated, but that’s the core of it.

Whether or not to use an existing solution is a compromise, mostly because the adaptation and refactoring require time and resources.

I think everyone would love to write the most efficient solution that is also the simplest. Generally, that’s not the case.

However, in many challenges – especially Easy/Medium ones – “average solutions” are accepted as well (these solutions work for the challenge, even if they are not the “best” in terms of speed, space, etc.).

The balance between the difficulty of a solution and its effectiveness is another compromise to take into account. On that note, let me tell you a story from my monthly coding dojo a few days ago. We had this exercise:

Given a string S, consisting of N lowercase English letters, reduce S as much as possible by deleting any pair of adjacent letters with the same value. N is at most 100.

An example:

aabccbdea

becomes “dea”, since we can first delete “aa”, getting:

bccbdea

Then we delete “cc” and get “bbdea”; finally we remove “bb” and get “dea”.

The attendees proposed a few solutions, none of which exhibited linear time and constant space complexity. Such a solution exists, but it’s a bit complicated. I proposed this challenge because I wanted people to reason about the compromises and, indeed, I made one point very clear: the maximum size of the string is 100. That’s small.

We had two leading solutions: one quadratic, and one linear in both time and space. The quadratic one could be written in a few lines by using adjacent_find.
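A sketch of it (reduce is a name of mine):

#include <algorithm>
#include <string>

// repeatedly erase the first pair of equal adjacent letters: O(N^2) overall
std::string reduce(std::string s)
{
    auto it = std::adjacent_find(s.begin(), s.end());
    while (it != s.end())
    {
        s.erase(it, it + 2);          // drop the pair and search again
        it = std::adjacent_find(s.begin(), s.end());
    }
    return s;
}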

The other one used a deque (actually, I have seen this solution applied many times in other contexts).
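A sketch of that one as well:

#include <deque>
#include <string>

// stack-like reduction: each letter either cancels the previous equal one or is kept
std::string reduceLinear(const std::string& s)
{
    std::deque<char> st;
    for (char c : s)
    {
        if (!st.empty() && st.back() == c)
            st.pop_back();
        else
            st.push_back(c);
    }
    return {st.begin(), st.end()};
}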

Both passed. Afterwards, we discussed the compromises of these solutions and understood the limits of both. In a more advanced dojo, I’ll propose solving this challenge in linear time and constant space!

Competitive Programming gives us the opportunity to see with our own eyes the impact of self-contained pieces of code, first of all in terms of time spent and space allocated. As programmers, we are forced to make predictions and estimations.

Estimating and predicting time/space is fundamental to deciding quickly which solution is the most viable when different compromises cross our path. Back to the mode example: using std::map or std::vector makes a huge difference. However, when the domain is small, we can simply not care about that difference. Somehow, this happens in ordinary software development too. For instance, how often do we use streams in production code? Sometimes? Never? Often? I think the fairest answer would be “it depends”. Streams are high-level abstractions, so they should be relatively easy to use; however, they are generally slower than C functions. On the other hand, C functions are low-level facilities, more error-prone and less flexible. What to do depends entirely on our needs and possibilities, and very often also on our attitude and our company’s.

Although Competitive Programming offers a simplified and approximated reality, facing a problem “as-is” is an opportunity to understand compromises, estimate time and space, adapt known solutions/patterns to new scenarios, learn something completely new, and think outside the box.

The latter point is related to another important fact that I see with my own eyes during my coding dojos in Italy: young non-professional people often find clever solutions more quickly than experienced programmers. The reason is complex, and I have discussed it with psychologists too. To put it in simple terms, this happens because young people generally have a “weaker mindset”: they have less experience and are not afraid of failure. That last aspect is important: many professionals are terribly afraid of failure and of making a bad impression. Thus, a proven solution that has worked before seems more reliable than something completely new. That’s basically human behavior. The problem comes up when the experienced programmer cannot think outside the box and is not able to find new and creative ways to solve a problem.

Competitive Programming is a quick and self-contained way to practice pushing our “coding mind” to the limits. As a Software Engineer, programmer and professional, I find it precious.

Competitive Programming offers both a “practice mode”, during which I can “stop and think”, reasoning about compromises, time, space, etc., and a “competition mode”, where I can “stress-test” my skills and experience.

Learning new things is often mandatory. This needs time, will and patience. But you know, it’s a compromise…

Recap for the pragmatic C++ competitive coder:

  • Compromises in Competitive Programming have different shapes:
    • Dimension and format of the input;
    • Time and space limits;
    • Data types;
    • Adaptability of a well-known solution;
    • Simplicity of the solution.
  • The essence of Competitive Programming consists in two phases:
    • Competition: the solution just has to work;
      • Top coders generally keep snippets and functions on hand, to use during contests.
    • Practice: deeply understanding compromises, variations and different implementations of the solution;
      • Top coders generally refine their snippets and functions.
  • A challenge may have simplifications and shortcuts:
    • The more we use those, the less generic our solution will be;
    • Hopefully, the more we use those, the more optimized (and/or simple) our solution will be;
    • Many challenges require us to find shortcuts in the problem constraints and description.

This post concludes my introduction to C++ containers. We’ll meet other standard data structures such as lists, queues and heaps when needed along the way.

Some posts ago, I anticipated that understanding containers is crucial for using standard algorithms effectively. In a few words, the reason is that each container has special features or is particularly suited to certain scenarios. On the other hand, algorithms work only in terms of iterators, which completely hide this fact. That’s great for writing generalized code, but it also merits attention because, to exploit a particular property of a container, you generally have to choose the best option yourself.

The only “specialization” that algorithms (may) do is in terms of iterators. Iterators are grouped into categories, which basically distinguish how iterators can be moved. For instance, consider std::advance, which moves an iterator by N positions. On random-access iterators, std::advance just adds an offset (e.g. it += N) – a constant-time operation. On forward iterators (which, basically, can advance only one step at a time), std::advance is obliged to call operator++ N times – a linear operation.

Choosing – at compile time – different implementations depending on the nature of the iterators is a typical C++ idiom which works well in many situations (this technique is an application of tag dispatching, a classical C++ metaprogramming idiom). However, to exploit the (internal) characteristics of a container, we have to know how the container works, which (particular) member functions it provides, and the differences between using the generic standard algorithm X and the specialized member function X.
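For instance, here is a minimal sketch of how an advance-like function may dispatch on the iterator category (my_advance is a hypothetical name; the real std::advance also handles negative offsets):

#include <iterator>

template <typename It>
void my_advance_impl(It& it, int n, std::random_access_iterator_tag)
{
    it += n;                          // constant time
}

template <typename It>
void my_advance_impl(It& it, int n, std::forward_iterator_tag)
{
    while (n-- > 0) ++it;             // linear time
}

template <typename It>
void my_advance(It& it, int n)
{
    // the tag type selects the right overload at compile time
    my_advance_impl(it, n, typename std::iterator_traits<It>::iterator_category{});
}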

As an example, I mentioned std::find and std::map::find. What’s the difference between them? Shortly: std::find is an idiomatic linear search over a range of elements; it basically goes on until either the target value or the end of the range is reached. Instead, std::map::find… wait, no spoilers! As usual, let me start this discussion through a challenge:

A few days ago I threw my birthday party and invited some friends. I want to know which name is the most common among them. Or: given a sequence of words, I want to know which one occurs the most.

In this trivial exercise we need a way to count occurrences of words. For example:

matteo riccardo james guido matteo luca gerri paolo riccardo matteo

matteo occurs three times, riccardo twice, the others once. We print matteo.

Imagine counting the words by incrementing a counter for each of them. Incrementing a counter should be a fast operation. Finally, we just print the string corresponding to the greatest counter.

The most common data structure for this task is generally known as an associative array: basically, a collection of unique elements – for some definition of “uniqueness” – which at least provides fast lookup time. The most common type of associative container is a map (or dictionary): a collection of key-value pairs, such that each possible key appears just once. The name “map” resembles the concept of function in mathematics: a relation between a domain (keys) and a codomain (values), where each element of the domain is related (mapped) to exactly one element of the codomain.

Designing maps is a classic problem of Computer Science, because inserting, removing and looking up these correspondences should be fast. Associative containers are designed to be especially efficient in accessing their elements by key, as opposed to sequence containers, which are more efficient in accessing elements by position. The most straightforward and elementary associative container you can imagine is actually an array, where keys coincide with indexes. Suppose we want to find the most frequent character in a string of lowercase letters.
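A minimal sketch of the idea (mostFrequent is a name of mine):

#include <algorithm>
#include <iterator>
#include <string>

// returns the most frequent character, assuming s contains only lowercase letters
char mostFrequent(const std::string& s)
{
    int freq[26] = {};                // freq[0] counts 'a', freq[1] counts 'b', ...
    for (char c : s)
        ++freq[c - 'a'];
    auto maxIt = std::max_element(std::begin(freq), std::end(freq));
    return static_cast<char>('a' + (maxIt - std::begin(freq)));
}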

freq contains the frequency of each lowercase letter (0 is ‘a’, 1 is ‘b’, and so on). freq[c - ‘a’] computes the distance between the char c and the first letter of the alphabet (‘a’), which is the corresponding index in the freq array (we already saw this idiom in the previous post). To get the most frequent char, we just retrieve the iterator (a pointer, here) to the element with the highest frequency (std::max_element returns such an iterator), then we calculate the distance from the beginning of freq, and finally we transform this index back into the corresponding letter.

Note that in this case lookup costs O(1). Although an array has many limitations (e.g. it cannot be enlarged, keys are just numbers lying in a certain range), we’ll see later in this series that – and not only in competitive programming – these “frequency tables” are extremely precious.

A plain array does not help with our challenge, though: how do we map instances of std::string?

In Computer Science many approaches to the “dictionary problem” exist, but the most common fall into two families of implementations: hashing and sorting. With hashing, the idea is – roughly speaking – to map keys to integral values that index a table (array). The trio “insert, lookup, remove” takes constant time on average, and linear time in the worst case. Clearly this depends on several factors, but explaining hash tables is beyond the target of this post.

The other common implementation keeps elements in order to exploit binary search for locating an element in logarithmic time. Often trees (commonly self-balancing binary search trees) are employed to maintain this ordering relation among the elements and also for having logarithmic performance on insert, lookup and removal operations.

The C++ STL provides both the hash-based (since C++11) and the sort-based implementations, also providing variants for non-unique keys (multi). From now on I’ll refer to the sort-based implementation as tree-based, because that’s the data structure used by the major C++ standard library implementations.

There is more: the STL provides two kinds of associative containers: maps and sets. A map implements a dictionary – a collection of key-value pairs. A set is a container of unique keys. We’ll discover that they provide pretty much the same operations and that, under the hood, they share a common implementation (either hash-based or tree-based). Also, a hash container and a tree container can be used almost interchangeably (from an interface point of view). For this reason, I’ll focus on the most used associative container: the tree-based map. We’ll discuss some general differences later.

Please, give a warm welcome to the most famous C++ associative container: std::map. It’s a sorted associative container that contains key-value pairs with unique keys. Here is a list of useful facts about std::map:

  • Search, removal, and insertion operations have logarithmic time complexity;
  • elements are kept in order, according to a customizable comparator – part of the type and specified as a template argument (std::less by default – actually the type is different since C++17, read on for more details);
  • iterators are bidirectional (pay attention that increments/decrements by 1 are “amortized” constant, whereas by N are linear);
  • each map element is an instance of std::pair<const Key, Value>.

The latter point means that we are not permitted to change keys (because that would imply reordering). If needed, you can extract the entry, remove it from the map, update the key, and then reinsert it.

Ordered associative containers use a single comparison function, which establishes the concept of equivalence: equivalence is based on the relative ordering of object values in a sorted range. Two objects have equivalent values with respect to the sort order used by an associative container c if neither precedes the other in c’s sort order.

In the general case, the comparison function for an associative container isn’t operator< or even std::less: it’s a user-defined predicate (available through the key_comp member function).

An important observation: in C++, every time you have to provide a “less” comparison function, the standard assumes you implement a strict weak ordering.

Let’s use std::map to solve the challenge:
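A minimal sketch, assuming the words come from the standard input:

#include <algorithm>
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> freq;
    std::string word;
    while (std::cin >> word)
        ++freq[word];                 // operator[] creates a zero counter on first sight

    auto mostCommon = std::max_element(freq.begin(), freq.end(),
        [](const auto& p1, const auto& p2) {
            return p1.second < p2.second;   // compare occurrences only
        });
    std::cout << mostCommon->first;
}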

How it works: each time we read a string, we increment the corresponding counter by 1. map::operator[] returns a reference to the value mapped to a key equivalent to the given key, performing an insertion if such a key does not already exist. At the end of the loop, freq is basically a histogram of the words: each word is associated with the number of times it occurs. Then we just need to iterate over the histogram to figure out which word occurs the most. We use std::max_element: a one-pass standard algorithm that returns the greatest element of a range, according to some comparison function (std::less by default, a standard function object which – unless specialized – invokes operator< on the objects to compare).

Given that map entries are pairs, we can’t rely on pair’s operator<, because it compares lexicographically (it compares the first elements and, only if they are equivalent, compares the second elements). For instance:

"marco", 1
"andrea", 5

according to pair::operator<, “marco” is greater than “andrea”, so it would result as the max_element. Instead, we have to consider only the second element of each pair. Thus we use a comparator like the following:
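// a sketch of the comparator: consider only the occurrence counters
[](const auto& p1, const auto& p2) { return p1.second < p2.second; }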

If your compiler does not support generic lambdas (auto parameters), explicitly declare the parameters as const pair<const string, int>&. The const string is not pedantry: if you type just string, you get a subtle extra copy that converts from pair<const string, int> to pair<string, int>. Bear in mind that the entries of a map are pair<const K, V>.

Suppose now we have an extra requirement: if two or more names occur the most, print the lexicographically least. Got it?

matteo riccardo matteo luca riccardo

In this case, “matteo” and “riccardo” both occur twice, but we print “matteo” because it is lexicographically lower than “riccardo”.

How do we accommodate our solution to fit this extra requirement? There’s an interesting effect of using a sorted container: when we iterate forward over the elements, we go from lexicographically lower strings to lexicographically greater ones. This property, combined with max_element, automatically supports the extra requirement. In fact, if more than one element in the range is equivalent to the greatest element, max_element returns the iterator to the first such element. Since the first such element is (lexicographically) the lowest, we already fulfill the new requirement!

Guess what happens if we want to print the lexicographically greatest string… it’s just a matter of iterating backward! Having these properties clear in mind is a great thing. In this series we’ll meet many others.

Let’s continue our journey through std::map. Suppose that part of another challenge is to build our own contacts application. We are given some operations to perform, two of which are adding and finding. For the add operation, we have to add a new contact if it does not exist, or update it otherwise. For the find operation, we have to print the number of contacts whose name starts with a given partial name. For example, suppose our list contains:

marco, matteo, luca

find(“ma”) will result in 2 (marco and matteo).

The best data structure for this kind of task is probably a trie, but the pragmatic competitive programmer knows that in several cases std::map suffices. We can take advantage of the fact that a map keeps things in order. The challenge is also an excuse to show you how to insert into a std::map, since there are several ways to achieve this task.

We have two cases:

  1. insert/update an entry
  2. count names starting with some prefix

Our map looks like:

map<string, string> contacts; // let's suppose the contact number to be a string as well

The first case has been discussed a lot in many books and blogs, so much so that C++17 provides an idiomatic function: insert_or_assign. In a few words, to efficiently insert or assign into a map, we first look for the contact in the structure and, in case of a match, we update the corresponding entry; otherwise we insert it.

This is the simplest way to do that:

contacts[toInsertName] = toInsertNumber;

You may ask: why does C++17 bother with a specific function for this one-liner? Because we are C++. Because that one-liner is succinct, but it hides a subtle cost when the entry is not in the map: default construction + assignment.

As we have seen, contacts[toInsertName] performs a lookup and returns either the corresponding entry or a brand-new one. In the latter case, a default construction happens. Then, = toInsertNumber assigns into the just-created string. Is that expensive? Maybe not in this case, but it may be in general, and this kind of thing matters in C++.

Here is a more enlightening example: suppose we have a cache implemented with std::map.
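Something like this hypothetical interface (the name getOrCompute and its signature are my own illustration):

#include <functional>
#include <map>
#include <string>

std::map<std::string, std::string> cache;

// returns the cached value for key; computes and stores it only if missing
std::string& getOrCompute(const std::string& key,
                          const std::function<std::string()>& compute);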

You don’t want to update anything if the key is already there. Rather, you first look for the value corresponding to the key, and only if it’s not there do you invoke the lambda to calculate it. Can you solve it by using operator[]? Maybe (it depends on the value type), but it’s neither effective nor efficient. Often std::map novices come up with code like this:
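// a sketch of the naive version, using the hypothetical getOrCompute above
std::string& getOrCompute(const std::string& key,
                          const std::function<std::string()>& compute)
{
    auto it = cache.find(key);                      // first lookup
    if (it == cache.end())
        it = cache.emplace(key, compute()).first;   // second lookup, inside emplace
    return it->second;
}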

map::find locates the element with key equivalent to a certain key, or returns map::end if no such element exists. map::emplace inserts a new element – constructed in place from the given args – if there is no element with that key in the container. emplace returns a pair<iterator, bool> consisting of an iterator to the inserted element (or to the already-existing one, if no insertion happened) and a bool denoting whether the insertion took place: true for “insertion”, false for “no insertion”.

This code implements what I call the infamous double-lookup anti-pattern: both find and emplace search the map. It would be better to – somehow – take advantage of the result of the first lookup and insert a new entry into the map only if needed. Is that possible?

Sure.

This is the idea: if the key we look for is not in the map, the position it should be inserted in is exactly the position where the first element that does not compare less than the key is located. Let me explain. Consider this sorted list of numbers:

1 2 3 5 6 7 8

If we want to insert 4, where should it be inserted? At the position of the first number that does not compare less than 4. In other words, the first element greater than or equal to 4. That is, 5.

This is nothing more than what mathematics defines as a lower bound. std::map provides lower_bound and, for consistency, the STL defines std::lower_bound to perform a similar operation on sorted ranges. As a generic algorithm, lower_bound is a binary search.

Here is what the idiom may look like, sketched on the cache example:
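// single traversal: lower_bound finds the entry or the correct insertion point
std::string& getOrCompute(const std::string& key,
                          const std::function<std::string()>& compute)
{
    auto lb = cache.lower_bound(key);
    if (lb != cache.end() && !cache.key_comp()(key, lb->first))
        return lb->second;                                  // key already there
    return cache.emplace_hint(lb, key, compute())->second;  // O(1) with a correct hint
}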

Since lower_bound returns the first element that does not compare less than key, that element can be key itself or not. The former case is handled by the right-hand side of the if condition: data.key_comp() returns the comparator used to order the elements (operator< by default). Since two equivalent elements do not compare less than each other, that comparison yields false – and its negation true. Otherwise, key is less than lb->first, because lb points to the element one past key’s position (or to the end, if no such element exists). Makes sense?

emplace_hint is like emplace; however, it also takes an extra iterator to “suggest” where the new element should be constructed and placed into the tree. If correct, the hint makes the insertion O(1). map::insert has an overload taking this extra iterator too, resulting in similar behavior (but remember that insert takes an already-built pair).

A simplification for pragmatic competitive programmers: when you do not use custom comparators, you may generally use operator== for checking equality:

if (lb != end(data) && key==lb->first)

Ah, C++17 has insert_or_assign, which searches the map and uses Value’s operator= in case the entry has to be updated, or inserts it otherwise (move semantics is handled as well). There is also another reason why insert_or_assign is important, but I’ll spend a few words on that later, when unordered_map is introduced.

Since I introduced lower_bound, I must say there is also an upper_bound: it locates the first element which compares greater than the searched one. For example:

1 2 3 4 5

upper_bound(3) locates 4 (position 3). What’s the point? Let’s turn our list into:

1 2 2 2 3 4

lower_bound(2) locates… 2 (position 1), whereas upper_bound(2) results in position 4 (element 3). Combining lower_bound(2) with upper_bound(2), we find the range of elements equivalent to 2! That’s a range in the C++ sense (upper_bound(2) is one past the last 2). This is extremely useful in multimap and multiset and, indeed, a function called equal_range exists, returning the combination of lower and upper bounds. equal_range is provided by all the associative containers (in the unordered_* ones only for interface-interchangeability reasons) and by the STL as an algorithm on sorted sequences – std::equal_range.

We’ll see applications of these algorithms in the future.

So, what about our challenge? Suppose it’s OK to use operator[] for inserting/updating elements – in this case string constructor + operator= are not a problem. We need to count how many elements start with a certain prefix. Is that easy? Sure, we have a sorted container, remember? Here is my idea: if we call lower_bound(P), we get either the end, the first element equal to P or… suspense… the first element starting with P. Since lower_bound returns the position of the first element which does not compare less than P, the first element that looks like P$something is what we get, if such an element exists.

Now what? I’m sure you already did this step in your mind: we just iterate until we find either the end or the first element that does not start with P. From the previous post we already know how to verify whether a string begins with another.
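Putting the pieces together, a sketch of the counting code (contacts is the map above, P the partial name; startsWith is a helper of mine, needing <algorithm>):

auto startsWith = [](const std::string& s, const std::string& prefix) {
    return s.size() >= prefix.size()
        && std::equal(prefix.begin(), prefix.end(), s.begin());
};

// linear scan from the lower bound, stopping at the first non-matching name
int count = 0;
for (auto it = contacts.lower_bound(P); it != contacts.end() && startsWith(it->first, P); ++it)
    ++count;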

We are paying both a prefix comparison and a linear iteration from lb (write it as O(|P|*K), where |P| is the length of the prefix P and K is the number of strings starting with P). Advanced data structures that deal with these – possible – problems more efficiently exist, but they are beyond the scope of this post. In a moment I’ll make another observation about this code.

I realized that the post is becoming longer than I imagined, so let me recap what we have met so far:

  • How to insert:
    • operator[] + operator=;
    • infamous double-lookup anti-pattern (e.g. find + insert);
    • efficient “get or insert”: lower_bound + emplace_hint/insert;
    • efficient “get or assign”/”insert or assign”: lower_bound + Value’s operator=;
    • C++17 insert_or_assign.
  • How to lookup:
    • find (aka: exact match);
    • lower_bound/upper_bound (aka: tell me more information about what I’m looking for);
    • operator[] (aka: always give me an instance – possibly a brand-new one, if the value can be default-constructed);
    • bonus: at (aka: throw an exception if the element is not there);
    • bonus: count (aka: how many elements equivalent to K exist? 0 or 1 on non-multi containers).
  • Taking advantage of sorting. For instance:
    • we combined max_element’s “stability” – it holds the first max found – with map’s order, to get the max element that is also the lexicographically least (or greatest, by iterating backward);
    • we learnt how to locate and iterate on elements which start with a given prefix.

To erase an element, you just call map::erase(first, last), erase(iterator), or erase(key). More interesting is how to implement erase_if, an idiom simplified by C++11 because erase now returns the iterator following the last removed element. This idiom can be applied to every associative container.
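A sketch of the idiom:

// erase_if: remove all the entries satisfying pred, in a single pass
template <typename AssociativeContainer, typename Predicate>
void erase_if(AssociativeContainer& c, Predicate pred)
{
    for (auto it = c.begin(); it != c.end(); )
    {
        if (pred(*it))
            it = c.erase(it);   // erase returns the iterator following the removed element
        else
            ++it;
    }
}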

Design choices

So, we still have an open question, right? What’s the difference between std::find and map::find?

You know that std::find is a linear search on a range. Now, I hope it’s clear that map::find is a logarithmic search and that it uses a notion of equivalence, instead of equality, to locate elements.

Actually, there is a bit more.

Let’s raise the bar: what’s the difference between std::lower_bound and map::lower_bound? First of all: is it possible to use std::lower_bound with map::iterator? std::lower_bound requires only forward iterators, thus the answer is yes. So what?

Basically, std::lower_bound – just like all the other algorithms – works only in terms of iterators; on the other hand, map::lower_bound – just like all the other map member functions – exploits the map’s internal details, and performs better. For example, std::lower_bound uses std::advance to move iterators, and you know that advancing a map::iterator takes linear time. Instead, map::lower_bound does a tree traversal (an internal detail), avoiding that overhead.

Although exceptions exist, the rule of thumb is: if a container provides its own version of a standard algorithm, don’t use the standard one.

I tell you a story about this rule. Remember that the comparator is part of the static interface of an associative container (it’s part of the type), unlike what happens in other languages like C# where the comparator is decoupled from the type and can be dynamically injected:

Dictionary<string, string> dict = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

Some time ago I had a discussion with a colleague about this aspect of C++: he was using a map<string, SomeValueType> to store some information, but he was using it only for case-insensitive searches, by calling std::find (the linear one). That code made my hair stand on end: “why not use a case-insensitive comparator and make this choice part of the map type?”, I asked. He complained: “C++ is breaking encapsulation: I won’t commit my map to a specific comparator. My clients mustn’t know how elements are sorted”.

At first blush I got annoyed but, after a couple of emails, I understood his point (it was about the architecture of the system he designed, rather than about C++ itself). At the end of a quite long – and certainly interesting – discussion, I came up with a solution that – more or less – satisfied both views: I introduced a new type which inherited from std::map and allowed injecting a comparator at construction time, as in C# (using less<> by default). I don’t recommend this approach (for example, the comparator can’t be inlined and every comparison costs a virtual call – it uses std::function under the hood), but at least we are not linearly searching the map anymore…

This story is just to say: use containers effectively. Don’t go against the language. std::map is not for linear searches, just as std::vector is not for pushing elements at the front.

I’d like to mention a fact about cache friendliness. std::map is generally considered a cache-unfriendly container. OK, we can use custom allocators but, to put it plainly: by default we are just pushing tree nodes onto the heap and moving through indirections. On the other hand, we are all generally happy with contiguous storage, like vectors or arrays, aren’t we? Is it possible to easily design cache-friendly associative containers? It is, when the most common operation is lookup. After all, what about binary-searching a sorted vector? That’s the basic idea. Libraries like Boost (flat_map) provide this kind of container.

As my friend Davide Di Gennaro pointed out, given the triplet of operations (insert, lookup, erase), the best complexity you can get for general-purpose usage is O(logN, logN, logN). However, you can amortize one operation by sacrificing the others. For example, if you do many lookups but few insertions, flat_map performs O(N, logN, N), and its logarithmic lookup has a much lower constant factor (it’s cache-friendly).

Consider this example: we want to improve our algorithm to count our contact names which start with a certain prefix. This time, we use a sorted vector and std::lower_bound to find the first string starting with the prefix P. In the previous code we just iterated through the elements until a mismatch was found (a string not starting with P).

This time, let’s think differently: say we have found the position of the first string starting with P. Call that position “lb” (lower bound). Now, it should be clear that we must find the next string not starting with P. Define this string to be the first one greater than lb, provided that “greater” means “not starting with P”. At this point, do you remember which algorithm finds the first element greater than another in a sorted sequence?

std::upper_bound.

So we can employ upper_bound with a particular predicate, and we expect logarithmic complexity. What will this predicate look like? Suppose we count “ma” prefixes. Strings starting with “ma” are all equivalent, aren’t they? So we can use a predicate which compares only “ma” (P) with the current string: when the current string starts with P, it’s equivalent to P and the predicate returns false; otherwise, it returns true. After all, starting the search from lower_bound’s result, we can only encounter either ma$something or $different-string.
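A sketch on a sorted vector&lt;string&gt; called names (both names and the startsWith helper are assumptions of mine):

auto startsWith = [](const std::string& prefix, const std::string& s) {
    return s.size() >= prefix.size()
        && std::equal(prefix.begin(), prefix.end(), s.begin());
};

auto lb = std::lower_bound(names.begin(), names.end(), P);
auto ub = std::upper_bound(lb, names.end(), P,
    [&](const std::string& s1, const std::string& s2) {
        // s1 is always P; false while s2 still starts with P
        return !startsWith(s1, s2);
    });
auto count = std::distance(lb, ub);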

Some clarifications: the first parameter of the lambda is always the value (or a conversion of it) to search for in the range (P, in our case). This is a guarantee to remember. The lambda returns false only while the current string (s2, inside the lambda body) still starts with P – that is, while it is still equivalent to P. std::upper_bound will handle the rest.

Why didn’t we use this approach directly on std::map? As I said at the beginning of this section, standard algorithms work in terms of iterators. Using std::upper_bound on std::map would cost O(logN) comparisons plus O(N) iterator movement: that additional N factor is caused by advancing iterators, which is linear on map::iterators. On the other hand, a sorted vector provides random-access iterators, so the final cost of counting prefixes is O(|P| * logN), given that we have sacrificed insertion and removal (at most, linear).

Recent additions

C++14 and C++17 add new powerful tools to associative containers:

  • Heterogeneous lookup: you are no longer required to pass the exact same object type as the key or element in member functions such as find and lower_bound. Instead, you can pass any type for which an overloaded operator< is defined that enables comparison to the key type. Heterogeneous lookup is enabled on an opt-in basis when you specify the std::less<> or std::greater<> “diamond functor” comparator when declaring the container variable, like: map<SomeKey, SomeValue, less<>>. See here for an example. This works only for sorted associative containers.
    This feature is also known as “transparent comparators”, because comparators that support it have to expose a type named is_transparent. This is basically required for backwards compatibility with existing code (see, for example, here for a more detailed explanation). A terrific usage of this feature is, for example, searching a map<string, Val> by passing a const char* (no conversion to string will be performed).
  • try_emplace and insert_or_assign, as an improvement of the insertion interface for unique-keys maps (more details here).
  • Ordered By Default: mostly for design and ABI-compatibility reasons, ordered associative containers now specify as default compare functor the new std::default_order_t (which behaves like std::less – more details here).
  • Splicing maps and sets: (following words by Herb Sutter) you will now be able to move internal nodes from one node-based container directly into another container of the same type (differing at most in the comparator template parameter), either one node at a time or the whole container. Why is that important? Because it guarantees no memory allocation overhead, no copying of keys or values, and even no exceptions if the container’s comparison function doesn’t throw. The mechanics are provided by the new functions .extract and .merge, and corresponding new .insert overloads (approved proposal here).
  • Structured bindings: we should be able to iterate on maps this way:
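A C++17 sketch, on the contacts map:

for (const auto& [name, number] : contacts)
    std::cout << name << ": " << number << '\n';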

A few words about unordered_map

We end this long post by spending a few words on unordered associative containers. I won’t show the multi* containers, because they are more or less the same as the corresponding non-multi ones. Clearly, they support multiple instances of the same key and, as I said, equal_range plays an important role in lookups. I’ll probably spend more time on multi containers when needed in future challenges.

After this section we’ll see a final example using unordered_set.

Like std::map, std::unordered_map contains key-value pairs with unique keys. Unlike std::map, internally the elements are not sorted in any particular order but organized into buckets. Which bucket an element is placed into depends entirely on the hash of its key. This allows fast access to individual elements, since once the hash is computed, it refers to the exact bucket the element is placed into. For this reason, search, insertion and removal of elements have average constant-time complexity. Due to the nature of hashing, it’s hard/impossible to know in advance how many collisions you will get with your hash function. This can add an element of unpredictability to the performance of a hash table. For this reason we use terms like “average”, “amortized” or “statistically” constant time when referring to the operations of a hash container.

This is not a discussion on hash tables, so let me introduce some C++ things:

  • the STL provides a default std::hash template to calculate the hash of standard types;
  • std::hash can be specialized for your types (or you can provide your own functor);
  • when a collision happens, an “equality” functor is used to determine if two elements in the same bucket are different (std::equal_to by default);
  • it’s possible to iterate through the elements of a bucket;
  • some hash-specific functions are provided (like load_factor, rehash and reserve);
  • unordered_map provides almost all the functions of std::map.

The latter point makes it simple to interchange std::map with std::unordered_map. There are two important things to say about this: 1) lower_bound and upper_bound are not provided, but equal_range is; 2) passing hints to the insertion functions of unordered_ containers is not really useful – at most, the hint helps the insertion “exit quickly”.

We know that on ordered associative containers, conditional insertion with lower_bound is the best way of performing an “insert or update”. So what? How can we say that ordered/unordered containers are more or less interchangeable if we miss lower_bound/upper_bound? We may apply equal_range:
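A sketch, on the contacts example:

// "insert or update" via equal_range, valid for both map and unordered_map
auto range = contacts.equal_range(toInsertName);
if (range.first != range.second)
    range.first->second = toInsertNumber;    // found: assign
else
    contacts.emplace_hint(range.first, toInsertName, toInsertNumber); // missing: insert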

This idiom is equivalent to the one using lower_bound (both semantically and from a performance point of view), plus it works on unordered maps.

Note that in C++17, try_emplace and insert_or_assign will dramatically improve the usability of unordered associative containers, efficiently handling the case where we need to perform a lookup first and insert a new element only if it is not present (first of all, the hash value won’t be recalculated). That’s the real value of these additions: insert_or_assign is not only more efficient, but also clearer and truly interchangeable.

Tree or Hash?

There are some general rules/facts to take into account when in doubt between tree-based and hash-based containers. They are general; this means – as always – that when really in doubt you can start with one, profile your code, switch to the other (again, thanks to the interface compatibility), profile again, and compare.

By the way, here are some facts for the pragmatic competitive coder:

  • on average, hash-based lookup is faster;
  • generally, hash-based containers occupy more memory (e.g. big array) than tree-based ones (e.g. nodes and pointers);
  • tree-based containers keep elements in order, is that feature important? (e.g. prefix lookups, get top elements, etc);

An application of unordered_set

Since I introduced std::unordered_map… let’s do a challenge involving unordered_set :) Jokes apart, this long post mostly hosted maps; I’d like to conclude with set, a really helpful and minimal associative container that we will meet again in the future.

Here’s the challenge:

Given N unique integers, count the number of pairs of integers whose difference is K.

For example, for this sequence:

1 5 3 4 2

With K = 2, the answer is 3 (we have three pairs whose difference is 2: 4-2, 3-1, 5-3).

The trivial solution to this problem has quadratic time complexity: we enumerate all the pairs and we increment a counter when the difference is K.

The way to tackle this problem is to convert the problem space from one in which we consider pairs into a search for a single value. An element A contributes to the final count only if A + K is in the sequence. For instance:

1 5 3 4 2

With K = 2: 1 contributes because 1 + 2 = 3 is in the list. Likewise, 3 is fine because 3 + 2 = 5 is in the list. And the same goes for 2, because we spot 2 + 2 = 4.

We can then store the input into an unordered_set (constant-time lookup, on average), iterate over the elements and, for each value A, search for A + K.
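A minimal sketch (countPairsWithDiff is a name of mine):

#include <algorithm>
#include <unordered_set>
#include <vector>

// counts the pairs whose difference is K, in O(N) average time
int countPairsWithDiff(const std::vector<int>& input, int K)
{
    std::unordered_set<int> values(input.begin(), input.end());
    return std::count_if(values.begin(), values.end(),
        [&](int A) { return values.count(A + K) > 0; });
}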

Some sugar: std::count_if makes it clear that we are counting how many elements satisfy a predicate. Our predicate is true when currElem + K exists in the set: we use unordered_set::count(A) to get the number of elements equal to A (either 0 or 1, since we use a non-multi set). As an idiom: on non-multi associative containers, container::count(key) yields 1 (say, true) if the key exists, 0 (say, false) otherwise.

On average, this solution has linear time complexity.

Another approach that uses no extra space and that involves sorting exists. Can you find it?

That’s enough for now.

Recap for the pragmatic C++ competitive coder:

  • Don’t reinvent containers whenever standard ones fit your needs. Consider STL associative containers:
    • std::map, std::set, std::multimap and std::multiset are sorted, generally implemented as self-balancing binary-search trees;
    • std::unordered_map, std::unordered_set, std::unordered_multimap and std::unordered_multiset are not sorted, implemented as hash tables;
  • Understand idioms to adoperate STL associative containers effectively:
    • Does an element with key equivalent to K exist? Use count(K);
    • Where is the element with key equivalent to K? Use find(K);
    • Where should the element with key equivalent to K be inserted? Use lower_bound(K) on sorted containers;
    • Insert a new element: use emplace, insert;
    • Insert a new element, knowing where: use emplace_hint, insert (only sorted containers take the advice effectively, unordered ones are presumptuous!);
    • Insert or update an element: operator[] + operator=, (C++17) insert_or_assign, equal_range + hint insertion (this is also for interface compatibility between ordered/unordered), lower_bound + hint insertion (only on sorted containers);
    • Get the element corresponding to key equivalent to K, or default if not present: use operator[] with K;
    • Get the element corresponding to key equivalent to K, or exception if not present: use at(K);
    • Erase elements: use erase(K), erase(first, last), erase(it).
  • Understand the difference between containers' member functions and STL generic algorithms. For example:
    • std::find and $associative_container::find do different searches, using different criteria;
    • std::lower_bound and $sorted_container::lower_bound do the same, but the former may perform worse than the member function because the latter works in terms of the container's internal details and structure.
  • Exploit standard algorithms properties. For example:
    • std::max_element and std::min_element are “stable”: max/min returned is always the first one found.
  • Prefer standard algorithms to hand-made for loops:
    • std::max_element/min_element, to get the first biggest/smallest element in a range;
    • std::count/count_if, to count how many elements satisfy specific criteria;
    • std::find/find_if, to find the first element which satisfies specific criteria;
    • std::lower_bound, std::upper_bound, std::equal_range, to find the “bounds” of an element within a sorted range.

I apologize if I didn’t publish any new posts in May but I was busy with the organization of the Italian C++ Conference 2016. If you feel like reading some lines about this event, check out this post.

In this installment of “C++ in Competitive Programming” we’ll meet a fundamental data structure in Computer Science, one that manages a sequence of characters, using some encoding: a string. As usual, let me introduce this topic through a simple challenge:

A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward and forward. Given a string, print “YES” if it is a palindrome, “NO” otherwise. The string contains only alphanumeric characters.

For example:

abacaba

is a palindrome; whereas:

abc

is not.

We need a type representing a sequence of characters and in C++ we have std::string, the main string datatype since 1998 (corresponding to the first version of the ISO C++ standard – known as C++98). Under the hood, imagine std::string as a pointer to a null-terminated (‘\0’-terminated) char array. Here is a list of useful facts about std::string:

  • std::string generalizes how sequences of characters are manipulated and stored;
  • roughly speaking, it manages its content in a similar way std::vector does (e.g. growing automatically when required);
  • apart from reserve, resize, push_back, etc. std::string provides typical string operations like substr, compare, etc;
  • it’s independent of the type of encoding used: all its members, as well as its iterators, will still operate in terms of bytes;
  • implementers of std::string generally embed a small array in the string object itself to manage short strings.

The latter point is referred to as the Small String Optimization and it means that short strings (generally 15/22 chars) are allocated directly in the string itself and don’t require a separate allocation (thanks to this guy on reddit who pointed out I was wrong here). Have a look at this table which shows the maximum size that each library implementation stores inside the std::basic_string.

The problem description states that the input is in the form:

string-till-end-of-stream

Thus we are allowed to simply read the string this way:
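A sketch:

string S;
cin >> S;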

Usually in CC a string is preceded by its length:

N string-of-len-N

In this case we don’t need to read N at all:
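Something like this (a sketch using cin.ignore):

cin.ignore(numeric_limits<streamsize>::max(), ' '); // skip everything up to the first whitespace
string S;
cin >> S;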

That will skip every character until the first whitespace and then will read the string that follows the whitespace into S.

Now let’s face the problem of determining whether S is a palindrome or not.

It should be easy to understand that S is a palindrome if reverse(S) is equal to S. So let’s start coding such a naive solution:
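A sketch:

string tmp(S.size(), ' ');
for (size_t i = 0; i < S.size(); ++i)
    tmp[i] = S[S.size() - i - 1];      // reverse by hand
cout << (S == tmp ? "YES" : "NO");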

As you can see, we access characters as we do with an array. As with std::vector, std::string also makes it possible to use iterators and member functions to access individual characters. In the last line we applied operator== to verify that “char-by-char S is strictly equal to tmp”. We could also use the string::compare() member function:
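Something like:

cout << (S.compare(tmp) == 0 ? "YES" : "NO");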

compare() returns a signed integer indicating the relation between the strings: 0 means they compare equal; a positive value means the string is lexicographically greater than the passed string; a negative value means the string is lexicographically less than the passed string. This function also supports comparison with offsets: suppose we want to check if a string is a prefix of another (that is, a string starts with a certain prefix). This is the most effective way to do that:
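A sketch (str and prefix are hypothetical names):

bool isPrefix = str.compare(0, prefix.size(), prefix) == 0; // compare only the first prefix.size() chars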

Bear this trick in mind.

compare() also supports overloads taking C-strings, preventing implicit conversions to std::string.

Now let’s turn back to reversing the string. Actually we don’t need to write a for loop manually because reversing a range is a common function already provided by the STL. Including <algorithm> we get the algorithms library that defines functions for a variety of purposes (e.g. searching, sorting, counting, manipulating) that operate on ranges of elements. To reverse a sequence in place we use std::reverse:
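reverse(begin(S), end(S)); // S itself is modified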

Since we don’t want to modify the original string, we can use std::reverse_copy to copy the elements from the source range to the destination range, in reverse order:
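A sketch:

string tmp(S.size(), ' ');
reverse_copy(begin(S), end(S), begin(tmp)); // S is left untouched
cout << (S == tmp ? "YES" : "NO");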

Here – as for std::vector – we have two options: creating an empty string, reserving enough space and then pushing letters back, or creating a properly-sized and zero-initialized string and then assigning every single char. Since char is a cheap data type, the latter option is generally faster (basically because push_back does some branching to check if the character to add fits the already initialized sequence). For this reason I prefer filling a string this way. As I pointed out in the previous post, a reader from reddit suggested using this approach also for std::vector<int> and honestly I agree. Larger types may have a different story. Anyway, as usual I suggest profiling your code when in doubt. For Competitive Programming challenges this kind of finesse makes no difference.

This solution is a bit more efficient than the previous one because it takes only two passes (one pass for reverse_copy and another one for operator==). We have also got rid of the explicit loop. What really makes this solution bad is the usage of that extra string. If we turn back to the initial for loop, it should be clear that we just need to check that each pair of characters that would be swapped are the same:

S = abacaba
S[0] == S[6]
S[1] == S[5]
S[2] == S[4]
S[3] == S[3]

That is, with i from 0 to N/2 we check that:

S[i] == S[N-i-1]

Applying this idea:
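A sketch:

bool isPalindrome = true;
for (size_t i = 0; i < S.size() / 2; ++i) {
    if (S[i] != S[S.size() - i - 1]) {
        isPalindrome = false;
        break;
    }
}
cout << (isPalindrome ? "YES" : "NO");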

Ok, this solution seems better. Can we do even better? Sure. Let’s meet another standard function.

Some algorithms operate on two ranges at a time: std::equal belongs to this family. It’s a function that checks if two ranges are strictly the same:
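Roughly, its simplest overload looks like:

template<typename InputIt1, typename InputIt2>
bool equal(InputIt1 first1, InputIt1 last1, InputIt2 first2);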

This function returns true if the elements in both ranges match. It works by making a pair-wise comparison, from left to right. By default the comparison operator is operator==. For example:

string c1 = "ABCA", c2="ABDA";
'A' == 'A' // yes, go on
'B' == 'B' // yes, go on
'C' == 'D' // no, stop

The comparison can be customized by passing a custom predicate as last parameter.

Now, consider the problem of checking if a string is palindrome. Our loop compares the first and the last character, the second and the second last character, and so on. If it finds a mismatch, it stops. It’s basically std::equal applied to S in one direction and to reverse(S) in the other. We just need to adapt the second range to go from the end of S to the beginning. That’s a job for reverse iterators:
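cout << (equal(begin(S), end(S), rbegin(S)) ? "YES" : "NO");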

Reverse iterators go backward from the end of a range. Incrementing a reverse iterator means “going backward”.

There is only one thing left: this solution now makes N steps, whereas only N/2 are really needed. We perform redundant checks. For instance:

abacaba
[0, 6]: a
[1, 5]: b
[2, 4]: a
[3, 3]: c // middle point
[4, 2]: a (already checked [2, 4])
[5, 1]: b (already checked [1, 5])
[6, 0]: a (already checked [0, 6])

Taking this consideration into account we get:
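cout << (equal(begin(S), next(begin(S), S.size() / 2), rbegin(S)) ? "YES" : "NO");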

std::next returns a copy of the input iterator advanced by N positions (this version does not require random access iterators).

We finally have a one-liner, single-pass and constant space solution.

I apologize if it took a while to get here: not only did I introduce some string functions, but I also incrementally approached the problem, showing how simple operations can be written in terms of standard algorithms. This process is precious since it helps to get familiar with the standard. Sometimes it does not make sense to struggle to find an algorithm that fits a piece of code, other times it does. The more you use the standard, the easier it will be for you to identify these scenarios pretty much automatically.

Don’t worry, in future posts I’ll skip trivial steps.

Now let me raise the bar, just a little bit.

Palindrome index

 

In Competitive Programming many variations of this topic exist. This is an example:

Given a string, determine the index of the character whose removal will make the string a palindrome. Suppose a solution always exists.

For example:

acacba

if we remove the second last character (‘b’), we get a palindrome

This problem – known as palindrome index – can be solved by introducing another useful algorithm that works like std::equal but returns the first mismatch, if any, instead of a bool. Guess the name of this algorithm…yeah, it’s called std::mismatch.

This problem is quite easy to solve:

  • locate the first char that breaks the “palindromeness” – call that position m (the mismatch point);
  • check if the string would be a palindrome if the m-th char were removed;
  • if so, the palindrome index is m, otherwise it is N – m – 1 (basically, the char “on the other side”).

Since this solution can be implemented in many ways I take advantage of this opportunity to introduce other string operations. First of all, here is how std::mismatch works:
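For example (S1 and S2 are hypothetical):

string S1 = "ABCDE", S2 = "ABXDE";
auto mism = mismatch(begin(S1), end(S1), begin(S2));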

You see that ‘C’ and ‘X’ result in a mismatch. mism is a std::pair: mism.first is S1.begin() + 2 and mism.second is S2.begin() + 2. Basically, they point to ‘C’ in the first string and to ‘X’ in the second. Suppose now we need to find this “palindrome index”. Consider as an example:
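string S = "acacba";
auto mism = mismatch(begin(S), end(S), rbegin(S)); // compare S with its own reverse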

mism.first points to ‘c’ and mism.second points to ‘b’. Since we know a solution always exists, one of these chars prevents S from being a palindrome. To determine which one, we need to check if S without the mismatch point was a palindrome or not. For this check, we create a new string from the concatenation of two substrings of S:

  • From the beginning to mism-1, and
  • From mism+1 to the end

Although I don’t like this solution so much, I have browsed others (not only written in C++) on HackerRank and this turned out to be the most popular. So let me show you my translation into C++ code:
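A sketch (assuming S is not already a palindrome – the problem guarantees a solution exists):

auto diffIt = mismatch(begin(S), end(S), rbegin(S)).first;
auto diffIdx = distance(begin(S), diffIt);
auto toCheck = S.substr(0, diffIdx) + S.substr(diffIdx + 1); // S without the mismatch point
bool palindrome = equal(begin(toCheck), end(toCheck), rbegin(toCheck));
cout << (palindrome ? diffIdx : S.size() - diffIdx - 1);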

Let me introduce substr(): S.substr(pos, count) returns the substring of S that starts at character position pos and spans count chars (or until the end of the string, whichever comes first) – S[pos, pos + count). If pos + count extends past the end of the string, or if count is std::string::npos, the returned substring is [pos, size()). For example:
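string s = "hello";
auto sub = s.substr(1, 3);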

substr results in “ell”.

It’s now evident that toCheck consists of the concatenation of S from 0 to diffIdx-1 and from diffIdx + 1 to the end:

acacba -> diffIdx = 1 
toCheck = a + acba = aacba

Only for completeness, another (possibly more efficient) way to obtain toCheck consists in using std::copy:
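A sketch:

string toCheck(S.size() - 1, ' ');
auto tail = copy(begin(S), diffIt, begin(toCheck)); // up to the mismatch point
copy(next(diffIt), end(S), tail);                   // past the mismatch point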

This solution works and passes all tests, but I find it annoying to use extra space here.

Suppose we are free to modify our original string: it’s easier (and possibly faster) to remove the character at the candidate iterator by using string::erase:
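A sketch:

auto diffIt = mismatch(begin(S), end(S), rbegin(S)).first;
auto diffIdx = distance(begin(S), diffIt);
S.erase(diffIt); // drop the mismatch point, in place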

This avoids both creating extra (sub)strings and allocating extra memory (note that the internal sequence of characters can be relocated into the same block). The final part of the algorithm is similar:
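bool palindrome = equal(begin(S), end(S), rbegin(S));
cout << (palindrome ? diffIdx : S.size() - diffIdx); // S is now one char shorter than the original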

The final cost of this solution is linear.

Now, what if we cannot change the string?

A solution consists in checking two ranges separately. Basically we just need to exclude the mismatch point and then verify that the resulting string reads as a palindrome, so we check:

  • From the beginning of S to the mismatch point (excluded) with the corresponding chars on the other side;
  • From one-past the mismatch point to the half of S with the corresponding chars on the other side.

Actually, the first check is already performed when we call mismatch, so we don’t need to repeat it.

To code the second check, just remember the second string goes from diffIt + 1 to the half of the string. So, we just need to correctly advance the iterators:
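A sketch:

bool palindrome = equal(next(diffIt), begin(S) + S.size() / 2, rbegin(S) + diffIdx);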

Let’s see this snippet in detail: next(diffIt) is just diffIt + 1. begin(S) + S.size()/2 is just the half of S. The third iterator, rbegin(S) + diffIdx, is the starting point of the string on the other side. Here is the complete solution:
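A sketch of the whole thing (again assuming S is not already a palindrome):

string S;
cin >> S;
auto diffIt = mismatch(begin(S), end(S), rbegin(S)).first;
auto diffIdx = distance(begin(S), diffIt);
bool palindrome = equal(next(diffIt), begin(S) + S.size() / 2, rbegin(S) + diffIdx);
cout << (palindrome ? diffIdx : S.size() - diffIdx - 1);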

If you followed my reasoning about positions then it’s just the matter of applying standard algorithms with some care for iterators.

You may complain this code seems tricky, so let me rewrite it in terms of explicit loops:
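A sketch of the loop-based equivalent:

size_t i = 0, j = S.size() - 1;
while (i < j && S[i] == S[j]) { ++i; --j; }   // find the mismatch point
size_t l = i + 1, r = j;
bool palindrome = true;
while (l < r) {                               // check, skipping S[i]
    if (S[l] != S[r]) { palindrome = false; break; }
    ++l; --r;
}
cout << (palindrome ? i : S.size() - i - 1);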

In the STL-based solution we clearly need to think in terms of iterators. The mismatch part is trivial (actually it could be replaced with a call to std::mismatch, as in the STL-based solution), but the calls to std::equal are a little bit more difficult to understand. At the same time, it should be evident that std::equal checks that two ranges are identical. Nothing more. Also, if we replace std::string with another data structure that provides iterators, our code won’t change. Our algorithm is decoupled from the structure of the data.

On the other hand, in the for-based approach the logic is completely hidden inside the iterations and the final check. Moreover, this code depends on the random-access nature of the string.

Judge for yourself.

 

Conversions

 

This short section is dedicated to conversions between strings and numeric types. I will start by saying that, in terms of speed, the following functions can be beaten given certain assumptions or in particular scenarios. For example, you may remember Alexandrescu’s talk about optimization (and here is a descriptive post) where he shows some improvements on string/int conversions. In CC the functions I’ll introduce are generally ok. It can happen that in uncommon challenges you’re required to take some shortcuts to optimize a conversion, mainly because the domain has some particularities. I’ll talk about domain and constraints in the future.

The STL provides several functions for performing conversions between strings and numeric types. Conversions from numbers to string can be easily obtained since C++11 through a set of new overloads:
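auto s1 = to_string(42);   // "42"
auto s2 = to_string(1.5);  // "1.500000"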

A disadvantage of this approach is that we pay a new instance of std::string every time we invoke to_string. Sometimes – especially when many conversions are needed – the following approach is cheaper:
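A sketch (value is a hypothetical int to convert):

array<char, char_needed> buf;
sprintf(buf.data(), "%d", value); // reuse buf for every conversion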

Or use vector<char> for allocating the string dynamically.

char_needed is the maximum number of chars needed to represent an int32. This value is basically:
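constexpr auto char_needed = numeric_limits<int32_t>::digits10 + 3; // all digits + sign + null terminator = 12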

From C++17 we’ll have std::string_view to easily wrap this array into a string-like object:
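string_view sv(buf.data()); // no copy, string-like interface (assumes buf is null-terminated)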

Moreover, from C++17 we’ll have string::data() as a non-const member, so we’ll be able to write directly into a std::string:
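string s(char_needed, '\0');
sprintf(s.data(), "%d", value); // C++17: data() returns a non-const char*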

In CC sprintf is good enough, even if sprintf_s (or another secure version) is preferred.

Anyhow, prefer using std::to_string if the challenge allows that.

Conversions in the other direction are a bit more confusing because we have both C++11 functions and C functions. Let me start just by saying that rolling a simple algorithm to convert a string into an unsigned integer is easy, pretty elegant and interesting to know about:
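A sketch (assuming the string contains only digits):

unsigned toUnsigned(const string& s)
{
    unsigned value = 0;
    for (char nxt : s)
        value = value * 10 + (nxt - '0'); // shift left by one decimal digit, add the new one
    return value;
}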

To convert to an int32 we just need to handle the minus sign:
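int toInt(const string& s)
{
    bool negative = !s.empty() && s.front() == '-';
    int value = 0;
    for (size_t i = negative ? 1 : 0; i < s.size(); ++i)
        value = value * 10 + (s[i] - '0');
    return negative ? -value : value;
}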

nxt – ‘0’ is an idiom: if digit is a char in [0-9], digit – ‘0’ results in the corresponding integral value. E.g.:

'1' - '0' = 1 (int)

The inverse operation is simply char(cDigit + ‘0’). E.g.:

char(1 + '0') = '1' (char)

In C++ (as in C) adding an int to a char will result in an int value: for this reason a cast back to char is needed.

With these snippets we are just moving through the ASCII table. ‘1’ – ‘0’ represents how far ‘1’ is from ‘0’, that is 1 position. 1 + ‘0’ is one position after ‘0’, that is ‘1’. With this idea in mind we can easily perform trivial lowercase to uppercase conversions:

// only works if c is lowercase
char(c - 'a' + 'A')

And viceversa:

// only works if c is uppercase
char(c - 'A' + 'a')

Anyhow, as one guy commented on reddit, the ASCII table is designed in such a way that just flipping one bit is needed to get the same results:

// assuming c is a letter
char toLower(char c) { return c | 0x20; }
char toUpper(char c) { return c & ~0x20; }

But remember that the C++ standard (from C, in <cctype>) already provides functions to convert characters to upper/lower case, to check if a character is upper/lower case, a digit, alpha, etc. See here for more details.

In CC, these tricks should be kept on hand. For example, this challenge requires implementing a simple version of the Caesar cipher:

Given a string S of digits [0-9], change S by shifting each digit by K positions to the right.

For example 1289 with K = 3 results in 4512.

We can easily solve this task by applying the tricks we have just learned:
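A sketch:

for (auto& c : S)
    c = char((c - '0' + K) % 10 + '0'); // shift the digit, wrapping around with modulo 10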

Note I used a range-based for loop even though a standard algorithm could help solve this problem. I won’t introduce it yet, though.

Now, let’s see some standard functions to convert strings to numbers. Since C++11 we have the ‘sto*’ functions (plus variants for unsigned values and floating points) which convert a std::string/std::wstring into numeric values (they also support different bases). Being STL functions, they throw exceptions if something goes wrong: a std::invalid_argument is thrown if no conversion could be performed, a std::out_of_range is thrown if the converted value would fall out of the range of the result type or if the underlying function (std::strtol or std::strtoll) sets errno to ERANGE. For example:
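auto n = stoi("123");        // 123
// stoi("hello")             // throws std::invalid_argument
// stoi("99999999999999")    // throws std::out_of_range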

This family of functions optionally outputs the number of processed characters:
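size_t processed = 0;
auto n = stoi("123abc", &processed); // n == 123, processed == 3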

On the other hand, C functions don’t throw exceptions, instead they return zeros in case of errors. For instance:
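auto ok = atoi("123");    // 123
auto ko = atoi("hello");  // 0 – errors are silent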

That’s enough for now.

Recap for the pragmatic C++ competitive coder:

  • Don’t reinvent containers whenever standard ones fit your needs:
    • Consider std::string to handle a sequence of characters
      • std::string::compare indicates the relation between strings
      • std::string::substr creates substrings
  • Prefer standard algorithms to hand-made for loops:
    • std::copy, to copy the elements in a range to another
    • std::reverse, to reverse the order of the elements in a range
    • std::reverse_copy, to copy the elements in a range to another, but in reverse order
    • std::equal, to know if two ranges are the same
    • std::mismatch, to locate the first mismatch point of two ranges, if any
  • Understand options for converting strings to numbers, and viceversa:
    • std::to_string, to convert numeric values into strings (a new instance of std::string is returned)
    • std::array (std::string in C++17) + C’s sprintf (or equivalent – e.g. secure _s version) when reusing the same space is important
    • std::sto* functions to translate strings into numeric values (remember they throw exceptions)
    • C’s atoi & friends when throwing exceptions is not permitted/feasible
    • Remember the tricks to convert char digits into ints and viceversa:
      • digitAsChar – ‘0’ = corresponding int
      • char ( digitAsInt + ‘0’ ) = corresponding char

One of the first challenges in the HackerRank‘s “Warmup” section is probably the “Hello World” of learning arrays in any language: calculating the sum of a sequence of elements. Although this exercise is trivial, I’ll face with it to break the ice and show you a few concepts that lay the groundwork for more complicated challenges.

I’m assuming you are already familiar with concepts like iterator, container and algorithm. Most of the time I’ll give hints for using these C++ tools effectively in Competitive Programming.

That’s the problem specification: You are given an array of integers of size N. Can you find the sum of the elements in the array? It’s guaranteed the sum won’t overflow the int32 representation.

First of all, we need an “array of size N”, where N is given at runtime. The C++ STL (Standard Template Library) provides many useful and cleverly designed data structures (containers) we don’t need to reinvent. Sometimes more complicated challenges require us to write them from scratch. Advanced exercises reveal less common data structures that cannot be light-heartedly included into the STL. We’ll deal with some examples in the future.

It’s not our case here. The primary lesson of this post is: don’t reinvent the wheel. Many times standard containers fit your needs, especially the simplest one: std::vector, basically a dynamic sequence of contiguous elements:
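A first sketch (we’ll improve it shortly):

size_t N;
cin >> N;
vector<int> v;
v.reserve(N);
for (size_t i = 0; i < N; ++i) {
    int elem;
    cin >> elem;
    v.push_back(elem);
}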

For the purpose of this basic post, here is a list of important things to remember about std::vector:

  • it’s guaranteed to store elements contiguously, so our cache will love it;
  • elements can be accessed through iterators, using offsets on regular pointers to elements, using the subscript operator (e.g. v[index]) and with convenient member functions (e.g. at, front, back);
  • it manages its size automatically: it can grow as needed. The real capacity of the vector is usually different from its length (size, in STL parlance);
  • enlarging that capacity can be done explicitly by using the reserve member function, which is the standard way to gently tell the vector: “get ready to accommodate N elements”;
  • adding a new element at the end of the vector (push_back/emplace_back) may not cause relocation as long as the internal storage can accommodate this extra element (that is: vector.size() + 1 <= vector.capacity());
  • on the other hand, adding (not overwriting) an entry at any other position requires relocating the vector (possibly in the same block of memory, if the capacity allows that), since the contiguity has to be guaranteed;
  • the previous point means that inserting an element at the end is generally faster than inserting it at any other position (for this reason std::vector provides push_back, emplace_back and pop_back member functions);
  • knowing in advance the number of elements to store is an information that can be exploited by applying the synergic duo reserve + push_back (or emplace_back).

The latter point leads to an important pattern: inserting at the end is O(1) as long as the vector capacity can accommodate the extra element – vector.size() + 1 <= vector.capacity(). You may ask: why not enlarge the vector first and then just assign values? We can do that by calling resize:
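A sketch:

vector<int> v;
v.resize(N); // N zero-initialized ints (same as: vector<int> v(N);)
for (size_t i = 0; i < N; ++i)
    cin >> v[i];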

resize enlarges the vector up to N elements. The new elements must be initialized to some value, or to the default one – as in this case. This additional work does not matter in this challenge, however initialization may – in general – cause some overhead (read, for example, these thoughts by Thomas Young). As a reader pointed out on reddit, push_back hides a branching logic that can add some cost. For this reason he suggests that two sequential passes over the data (which is contiguous) may be faster. I think this can be true especially for small data, however the classical recommendation is to profile your code in case of such questions. In my opinion getting into the habit of using reserve + *_back is better and potentially faster in general cases.

The heart of the matter is: need a dynamic array? Consider std::vector. In competitive programming std::vector is 99% of the time the best replacement for a dynamic C-like array (e.g. T* or T**). 1% is due to more advanced challenges requiring us to design different kind of dynamic arrays that break some std::vector’s guarantees to gain some domain-specific performance. Replacing std::vector with custom optimized containers is more common in real-life code (to have an idea, give a look for example here, here and here).

If N was given at compile-time, a static array could be better (as long as N is small – say less than one million – otherwise we get a stack overflow). For this purpose, std::array is our friend – basically a richer replacement of T[]. “Richer replacement” means that std::array is the STL-adaptation of a C-array. It provides member functions we generally find in STL containers like .size(), .at(), .begin()/.end(). std::array combines the performance and accessibility of a C-style array with the benefits of a standard container. Just use it.

Since much information is stated in the problem’s requirements, we’ll see that static-sized arrays are extraordinarily useful in competitive programming. In the future I’ll spend some time about this topic.

Now, let’s look at my snippet again: can we do better? Of course we can (from my previous post):
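vector<int> v;
v.reserve(N);
copy_n(istream_iterator<int>(cin), N, back_inserter(v));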

At this point we have the vector filled and we need to compute the sum of the elements. A hand-made for loop could do that:
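int sum = 0;
for (auto elem : v)
    sum += elem;
cout << sum;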

Can we do better?

Sure, by using the first numeric algorithm of this series: ladies and gentlemen, please welcome std::accumulate:
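cout << accumulate(begin(v), end(v), 0);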

One of the most important loops in programming is one that adds a range of things together. This abstraction is known as reduction or fold. In C++, reduction is mimicked by std::accumulate. Basically, it accumulates elements from left to right by applying a binary operation:
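Roughly speaking:

accumulate(first, last, init, op) == op( ... op( op(init, *first), *(first + 1) ) ..., *(last - 1) )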

accumulate with three parameters uses operator+ as binary operation.

std::accumulate guarantees:

  • the order of evaluation is left to right (known also as left fold), and
  • the time complexity is linear in the length of the range, and
  • if the range is empty, the initial value is returned (that’s why we have to provide one).

The reduction function appears in this idiomatic form:
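That is, roughly:

ResultType f(ResultType accumulated, ElementType element);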

So the result type may be different from the underlying type of the range (ElementType). For example, given a vector of const char*, here is a simple way to calculate the length of the longest string by using std::accumulate (credits to Davide Di Gennaro for having suggested this example):
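A sketch (the names are hypothetical; strlen comes from <cstring>):

vector<const char*> names = {"Marco", "Herb", "Bjarne"};
auto longest = accumulate(begin(names), end(names), size_t{0}, [](size_t len, const char* s) {
    return max(len, strlen(s));
}); // 6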

To accumulate from the right (known as right fold) we just use reverse iterators:
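auto sum = accumulate(rbegin(v), rend(v), 0);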

Right fold makes some difference – for example – when a non-associative function (e.g. subtraction) is used.

In functional programming fold is very generic and can be used to implement other operations. In this great article, Paul Keir describes how to get the equivalent results in C++ by accommodating std::accumulate.

Does std::accumulate have any pitfalls? There exist cases where a+=b is better than a = a + b (the latter is what std::accumulate does in the for loop). Although hacks are doable, I think if you fall into such scenarios, a for loop would be the simplest and the most effective option.

Here is another example of using std::accumulate to multiply the elements of a sequence:
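auto product = accumulate(begin(v), end(v), 1, multiplies<>{});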

std::multiplies<> is a standard function object (find others here).

Using standard function objects makes the usage of algorithms more succinct. For example, the problem of finding the missing number from an array of integers states: given an array of N integers called “baseline” and another array of N-1 integers called “actual”, find the number that exists in “baseline” but not in “actual”. Duplicates may exist. (This problem is a generalization of the “find the missing number” problem, where the first array is actually a range from 0 to N and a clever solution is to apply the famous Gauss formula N(N+1)/2 and subtract this value from the sum of the elements of “actual”.) An example:

The missing number is 2.

A simple linear solution is calculating the sum of both the sequences and then subtracting the results. This way we obtain the missing number. This solution may easily result in integer overflow, which is undefined behavior in C++. A wiser solution consists in xor-ing the elements of both the arrays and then xor-ing the results.

Xor is a bitwise operation – it does not “need” new bits – so it never overflows. To realize how this solution works, remember how xor works:
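In short:

a ^ a = 0  (a value cancels itself)
a ^ 0 = a  (0 is the identity)
a ^ b = b ^ a  and  (a ^ b) ^ c = a ^ (b ^ c)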

Suppose that “a” is the result of xor-ing all the elements but the missing one – basically it’s the result of xor-ing “actual”. Call the missing number “b”. This means that xor-ing “a” with the missing element “b” results in xor-ing together the elements in the “baseline” array. Call the total “c”. We have all the information to find the missing value since “a” ^ “c” is “b”, that is just the missing number. That’s the corresponding succinct C++ code:
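A sketch (bit_xor lives in <functional>):

auto a = accumulate(begin(actual), end(actual), 0, bit_xor<>{});
auto c = accumulate(begin(baseline), end(baseline), 0, bit_xor<>{});
cout << (a ^ c); // the missing number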

Let’s go back to the initial challenge. We can do even better.

To realize how, it’s important to get into the habit of thinking in terms of iterators rather than containers. Since standard algorithms work on ranges (pairs of iterators), we don’t need to store input elements into the vector at all:
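cout << accumulate(next(istream_iterator<int>(cin)), istream_iterator<int>(), 0);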

Advancing by one – using next – is a licit action since the problem rigorously describes what the input looks like. This snippet solves the challenge in a single line, in O(n) time and O(1) space. That’s pretty good. It’s also our first optimization (actually not required) since our solution dropped to O(1) space – using std::vector was O(n).

That’s an example of what I named “standard reasoning” in the introduction of this series. Thinking in terms of standard things like iterators – objects making algorithms separated from containers – is convenient and it should become a habit. Although it seems counterintuitive, from our perspective of C++ coders thinking in terms of iterators is not possible without knowing containers. For example we’ll never use std::find on a std::map; instead we’ll call the find member function, and the reason is that we know how std::map works. In a future post I’ll show you other examples about this delicate topic.

Our solution leads to ranges naturally:
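A sketch in range-v3 syntax of the time:

cout << ranges::accumulate(ranges::istream<int>(cin) | view::tail, 0);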

view::tail takes all the elements starting from the second (again, I skipped the input length), and ranges::istream is a convenient function which generates a range from an input stream (istream_range). If we had needed to skip more elements at the beginning, we would have used view::drop, which removes the first N elements from the front of a source range.

The iterator-based and range-based solutions look very similar, however – as I said in the introduction of this series – iterators are not easy to compose whereas ranges are composable by design. In the future we’ll see examples of solutions that look extremely different because of this fact.

In Competitive Programming these single-pass algorithms are really useful. The STL provides several single-pass algorithms for accomplishing tasks like finding an element in a range, counting occurrences, verifying some conditions, etc. We’ll see other applications in this series.

In the next post we’ll meet another essential container – std::string – and we’ll see other algorithms.

Recap for the pragmatic C++ competitive coder:

  • Don’t reinvent containers whenever standard ones fit your needs:
    • Dynamic array? (e.g. int*) Consider std::vector
    • Static array? (e.g. int[]) Use std::array
  • Prefer standard algorithms to hand-made for loops:
    • often more efficient, more correct and consistent,
    • more maintainable (a “language in the language”)
    • use standard function objects when possible
    • use std::accumulate to combine a range of things together
    • If customizing an algorithm results in complicated code, write a loop instead
  • Think in terms of standard things:
    • iterators separate algorithms from containers
    • understand containers member functions

Welcome back to my series about Competitive Programming. Here is the introduction in case you missed it.

In this post I’ll explain some common idioms to deal with input and output.

In C++ a simple task like reading an integer from the standard input can be done in different ways: using streams, C functions or OS-dependent calls. The streams model offers a pretty high level interface, but it is generally slower than using native operating system calls. However, in many cases it is acceptable.

I have solved a lot of challenges and very rarely have I had to switch to C functions (e.g. scanf) or to turn off the synchronization that C++ streams keep with the standard C streams after each input/output operation (by using std::ios_base::sync_with_stdio(false)). Most of the time the I/O is not the point of the exercise, so we can use the convenient streams model. This point seems irrelevant but it brings about simplification, enabling us to write not only simpler but also safer idioms. We’ll see that in some cases the streams interface is enough for basic parsing as well.

Due to the nature of the challenges – generally not focusing on I/O – I remark that the following idioms are pragmatic: they “work here”, but they may not fit a production environment where streams are often avoided like the plague. As I have read in a comment on reddit: “I fear that it may lead some people to prefer quick and functional solutions that work for some cases, neglecting important traits such as scalability, performance or cross-platform compatibility”. I think that solving challenges is about prototyping a solution which has to cover some sensible scenarios cleverly set up by challenge moderators. “Production” is another business. Definitely. Requirements evolve, scalability matters, etc. Being well-versed in Competitive Programming is not enough, sure, but I think that solving challenges is an entertaining way to face new problems that maybe – at some point – you’ll deal with for “real life purposes”.

I repeat a point from the introduction that I consider the most important: competitive programming is guaranteed brain exercise: “on any given day, what you’re asked to do at work may or may not be challenging, or you may be exercising one of the many non-technical skills required of software developers. By practicing regularly with competitive programming problems, you can ensure that the coding part of your brain receives a regular workout. It’s like one of those brain training apps, but for real skills”. In addition to this, eleven other reasons are wisely explained in this nice article which I recommend again.

After this short additional introduction, let me show some code about I/O.

Just for completeness, in the simplest cases you only need to read lonely values like numbers or strings:

int N, M;
cin >> N >> M;

Remember that in these challenges you don’t need to validate the input (unless stated otherwise – I’ve never come across it).

In one of the most common scenarios you need to read a sequence of numbers, and you can do it by using the synergetic copy_n + istream_iterator + back_inserter trio. That’s our first idiom. Generally the length of the sequence is passed just before (and the numbers are separated by whitespaces):
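A sketch of the idiom:

size_t length;
cin >> length;
vector<int> sequence;
sequence.reserve(length);
copy_n(istream_iterator<int>(cin), length, back_inserter(sequence));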

reserve prepares the vector to host “length” elements – possibly allocating memory. A note: I’d really prefer writing something like vector<int> sequence(reserve, length), where “reserve” here is just a tag. The same applies to resize, etc. This would also avoid the predominance of the initializer-list overload:
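vector<int> v1(10); // 10 zero-initialized elements
vector<int> v2{10}; // 1 element with value 10 – braces pick the initializer_list constructor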

I used copy_n instead of copy because not only is it clearer, but also convenient in case of more input to read (if I used copy then I would need to pass an end-of-stream iterator, but passing istream_iterator<int>() is dangerous because copy goes on iterating until the stream gets invalid).

With ranges the code streamlines:
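A sketch (range-v3 syntax of the time):

auto sequence = ranges::istream<int>(cin) | view::take(length);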

view::take yields a range of exactly length elements, in this case taken from the standard input, and ranges::istream is a convenient function which generates a range from an input stream (istream_range).

Since a default operator>> for vector is not defined, reading a matrix needs manual iteration (here using copy_n is even more convenient):
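A sketch:

size_t rows, cols;
cin >> rows >> cols;
vector<vector<int>> matrix(rows);
for (auto& row : matrix) {
    row.reserve(cols);
    copy_n(istream_iterator<int>(cin), cols, back_inserter(row));
}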

Remember we don’t want to create a library, because it couldn’t be used to solve challenges. For this reason we don’t introduce an operator>> for vector.

Other cases involve strings, not just words but also lines. Pay attention to getline: as I have recently blogged (for another reason, but the lesson learned was the same), getline is an unformatted function. This means it does not ignore leading delimiters (newline by default). These innocent lines can lead to something you may not expect:

int N; string line; 
cin >> N; getline(cin, line);

The problem here is that we want to ignore the separator between the int and the line. E.g.:

10'\n'
a line representing some data

Since getline does not skip the leading ‘\n’, it will stop immediately, writing nothing into the string.

This problem is easy to solve by passing std::ws (a manipulator which discards leading whitespaces from an input stream):
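int N; string line;
cin >> N >> ws;      // ws eats the '\n' left in the stream
getline(cin, line);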

This succinct version is valid too:
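getline(cin >> ws, line);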

And here is how the ranges join the party:
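A sketch with range-v3’s getlines:

auto lines = ranges::getlines(cin); // a lazy range of lines read from the stream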

In another recurring pattern the input appears in this form:

N T
a1 a2 ... aN
t1
t2
...
tT

N is the length of a sequence of numbers, T is the number of test cases. Generally this kind of challenges require you to operate on the sequence with some kind of clever pre-processing and to use each test-case value (or values) to perform an operation quickly. For example, for each test-case value t, output the sum of the first t elements in the sequence.

Here is the simplest code to use for reading inputs in this scenario:
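A sketch:

size_t N, T;
cin >> N >> T;
vector<int> sequence;
sequence.reserve(N);
copy_n(istream_iterator<int>(cin), N, back_inserter(sequence));
while (T--) {
    int t;
    cin >> t;
    // ...answer the query for t...
}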

 

We’ll meet this problem again later in this series.

More complex input patterns can be handled by combining the examples above.

 

Printing answers

 

Just a few lines on output. Obviously, by using std::cout and some convenient standard abstractions like ostream_iterator.

Often the answers are just one number or two. In this case, just send them to the standard output applying the required formatting rules (e.g. space separated):
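Something like this (answer1/answer2 are hypothetical names):

cout << answer1 << " " << answer2 << "\n";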

I generally don’t use endl because it causes a stream flush and this may negatively affect the performance. For this reason I use just “\n” out of habit. In certain situations a printf is probably shorter, but it’s still error prone: not only if you write a wrong format, but also if you forget to update it in case you change a type (e.g. imagine you solve a problem by using int and then you change it to unsigned long long). With streams you don’t need to change anything because they automatically pick the correct operator<< overload.

When you need to print a sequence – like a vector – just use copy + ostream_iterator:
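copy(begin(v), end(v), ostream_iterator<int>(cout, " "));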

Note that the last separator is printed after the last element. This means extra care is needed to avoid it. For example:
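A sketch (assuming v is not empty):

copy(begin(v), prev(end(v)), ostream_iterator<int>(cout, " "));
cout << v.back(); // the last element goes out separately, with no trailing separator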

Maybe in C++17 we’ll use the trailblazing ostream_joiner to save the extra line – since the “last” separator is not outputted at all:
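A sketch (ostream_joiner lives in <experimental/iterator>, where available):

copy(begin(v), end(v), experimental::make_ostream_joiner(cout, " "));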

Another worthwhile scenario is when you need to print floating point numbers and the output is expected to be an approximation of a certain number of digits. fixed and setprecision are good friends in this case. For example:
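Something like this (num1 and num2 being doubles):

cout << fixed << setprecision(2) << num1 << " " << num2;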

will print num1 and num2 with exactly two digits after the decimal point:
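For instance, with hypothetical values:

double num1 = 1.0, num2 = 1.2371;
cout << fixed << setprecision(2) << num1 << " " << num2;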

will print:

1.00 1.24

In case you need to print in a loop, it’s a good habit to set stream manipulators once, before the iteration:
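A sketch (data is a hypothetical container of doubles):

cout << fixed << setprecision(2); // set once
for (auto d : data)
    cout << d << "\n";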

I’ll be back on some standard mathematical utilities (e.g. rounding) later in this series.

 

Pushing streams to the limit

 

Before concluding, it’s worth sharing a couple of “beyond-the-basics” examples. Sometimes just configuring streams options is enough for solving some problems.

The first challenge required tokenizing a string based on given delimiters and printing the obtained tokens to the standard output, one per line. In other languages like Java you could use a StringTokenizer – indeed lots of Java guys use this opportunity on CC websites. Note that more complex challenges where parsing is the main topic do not allow you to use standard things like streams or tokenizers (in other languages) – they wouldn’t be as efficient as the challenge requires!

This solution is not intended for production. So please don’t post comments like “streams are slow” or “who uses streams to tokenize?”. Here we have a different scenario. This code can be easily plugged into challenges requiring some basic parsing. By default streams skip whitespaces; here we need to skip other delimiters as well.

The Ranges library provides views like split and delimit for this kind of thing:
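A sketch (str is a hypothetical string):

auto tokens = str | view::split(' '); // a range of tokens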

Anyhow, let me go back to my modest C++14 empty sheet on HackerRank, I have only 30′ to complete the contest and Java boys are submitting two-line solutions…should I roll my own mini-parser?

C++ has a (a bit old-fashioned?) concept called facet to handle specific localization aspects. Basically, each facet corresponds to a precise localization setting that can be tuned. For example, the ctype facet encapsulates character classification features – it can answer questions like “is this char a space?” or “is this char a digit?”. One can inherit from a base facet and change the default behavior.

For our task we can inherit from ctype<char>, a specialization of std::ctype which encapsulates character classification features for type char. Unlike general-purpose std::ctype, which uses virtual functions, this specialization uses table lookup to classify characters (which is generally faster). Here is a solution to the challenge:
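A sketch (the delimiter set is hypothetical; the class name matches the one discussed below):

struct custom_delims : ctype<char>
{
    custom_delims(const string& delims) : ctype<char>(make_table(delims)) {}

    static const mask* make_table(const string& delims)
    {
        // start from the classic table and classify each delimiter as a space
        static vector<mask> table(classic_table(), classic_table() + table_size);
        for (char d : delims)
            table[(unsigned char)d] = space;
        return table.data();
    }
};

int main()
{
    cin.imbue(locale(cin.getloc(), new custom_delims(".,;"))); // hypothetical delimiters
    string token;
    while (cin >> token)     // operator>> now stops at '.', ',' and ';' too
        cout << token << "\n";
}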

This requires a bit of clarification if you don’t know facets at all: ctype<char> uses a table to associate each char with its “classification” (this table is, indeed, called the classification table). The table size is standard (ctype<char>::table_size – at least 256). We just need to set char delimiters to ctype_base::space. All std::istream formatted input functions are required to use std::ctype<char> for character classing during input parsing. For example, istream::operator>> asks ctype::is whether a char is a space (aka a delimiter). Under the hood ctype<char> looks up the internal table.

Don’t worry about the lonely custom_delims allocation, the locale guarantees that the new facet will be deleted as soon as it’s not referenced anymore (facets are reference counted – another possible performance penalty, in addition to virtual function calls).

Although I never use such an approach for parsing in my production code, in Competitive Programming it may be acceptable. For example, on HackerRank I submitted this code against a test-case requiring to split a string of 400’000 chars and it took (output included) 0.05s – within the time requirement of that challenge. And this code is easily reusable.

I promised two examples. The other was about number punctuation: given an integer number, print the string representation of the number, comma separated, supporting different comma styles – e.g. US 1000000 is 1,000,000, whereas Indian 1000000 becomes 10,00,000. Once again, we may use the standard localization support:
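A sketch (the facet name is hypothetical; each char of the grouping string is a group size, starting from the rightmost group):

struct grouping_facet : numpunct<char>
{
    grouping_facet(string grouping) : grouping_(move(grouping)) {}
    char do_thousands_sep() const override { return ','; }
    string do_grouping() const override { return grouping_; }
    string grouping_;
};

int main()
{
    cout.imbue(locale(cout.getloc(), new grouping_facet("\3")));   // US style
    cout << 1000000 << "\n";                                       // 1,000,000
    cout.imbue(locale(cout.getloc(), new grouping_facet("\3\2"))); // Indian style
    cout << 1000000 << "\n";                                       // 10,00,000
}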

The code is quite self-explanatory. For more details I encourage you to have a look at the numpunct facet.

Hope you enjoyed this post because sometimes stream capabilities suffice in such competitions – consider them before rolling your own alternatives.

 

Summary

 

A brief recap of some weapons to add to our “pragmatic C++ competitive programmer toolkit”:

  • Usually I/O is not the point of the exercises, so using streams is fine
  • To read a sequence of values into a vector use:
    • vector::reserve(N), and
    • copy_n + istream_iterator + back_inserter
  • To print sequences use copy + ostream_iterator
  • To print floating points rounded to the nth-digit use fixed << setprecision(n)
  • If parsing is not the point of the challenge, consider localization features

In the last two years I have turned into a Competitive Programming aficionado, mostly active on HackerRank, less on TopCoder and CodingGame. These websites enable coders not only to compete with others, but also to practice by solving several categorized exercises. After you write your program in the browser, they run and test it against some prepared test-cases. In all probability you can use your favorite language because they support pretty much everything, but I consider spending time with other languages really valuable as well.

When I started I just wanted to challenge myself and push my brain beyond its limits. I really appreciated that facing certain kinds of problems was a good excuse to explore new areas of CS and learn something new. Even if at that time it was not my target, competitive programming is a great way to prepare for technical job interviews.

Speaking in general, there are several reasons to get involved in Competitive Programming. I found a very nice article about that here. I quote the point that was the most important for me when I started: it’s guaranteed brain exercise: “On any given day, what you’re asked to do at work may or may not be challenging, or you may be exercising one of the many non-technical skills required of software developers. By practicing regularly with competitive programming problems, you can ensure that the coding part of your brain receives a regular workout. It’s like one of those brain training apps, but for real skills.”

Regardless of your programming language, spend some time on competitive coding.

Why C++ in Competitive Programming?

 

If you solve a challenge on websites like HackerRank, you may peek into other people’s solutions. Looking at C++ solutions, I have found a lot of “C++ people” using C++ mostly as C. Many people don’t consider (don’t know?) what the standard offers. On the other hand, I find it natural to face challenges with modern C++ by my side, coding as simply as possible. For example, I often ask myself: may I use an algorithm instead of this for loop? This attitude is worthwhile here as it has been for years in my daily job.

I realize the word modern is overloaded: in the last years we have all probably heard the expression modern C++ everywhere and sometimes it looked like a buzzword. I mean using the full standard toolset of C++, without external libraries, nor overcomplicated abstractions. Mainly because in competitive programming you don’t have libraries (sometimes you don’t have the standard library either) and you cannot lose time with too much generalization. Many times the same results fit real-life development.

It’s worth noting that I’m not going to explain things like “how to make a linked list”. Excellent algorithms & data structures courses/books/series have been developed for this purpose. Rather, expect to see how I used a forward list to solve a problem, where I found a multiset necessary, or when lower_bound saved my life.

I have told some friends about this series and they suspected that C++ was not the right language for talking about Competitive Programming. Actually, I had a look at the top ranked people on websites like TopCoder and HackerRank and C++ was always there (together with Java and Python, mainly). I found another indicative example on Google Code Jam: most of the higher-performing contestants code in C++.

I’m not surprised at all.

Certainly C and C++ are considered the lingua franca of algorithms and data structures, but I think the main reason is the control C++ offers and its independence from the paradigm: it does not force a single “way of doing” on you to solve a challenge.

My friend Alessandro Vergani has another sensible idea: traditionally C and C++ have a poor standard library compared to newer languages like Java and C#, for this reason C/C++ programmers are at ease with writing everything they need from scratch – and often too much. This means they are fluent with coding things that in other languages are taken for granted.

Moreover, I think since lots of participants are students, C++ is very common as a “first language”.

 

Challenges are about trade-offs

 

My purpose is solving challenges without reinventing the wheel. This means if I can use standard things to make all the test cases pass, I’ll do it. Obviously, I’ll justify my decisions and I’ll discuss a bit. For example, what about this snippet for pretty printing a non-zero unsigned 32-bit integer N into its binary representation:
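A sketch:

string binary;
for (auto n = N; n > 0; n >>= 1)
    binary += char('0' + (n & 1));      // collect bits, least significant first
reverse(begin(binary), end(binary));
cout << binary;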

It’s simple, but it’s not as efficient as it could be if written in other ways.

Does it suffice to solve the problem?

Maybe.

If so, I would be happy with this simple solution. If not, the challenge is asking me to find better ways to solve the problem.

Challenges are about trade-offs. The requirements and the constraints of the problem matter a lot. They are a rough approximation of a real problem. Often you can exploit the constraints of the problem and take a shortcut – this may mean optimizations, simplifications, etc. Very often you have to exploit those constraints to solve the problem at all.

Many times these shortcuts are not so far from real life needs.

 

Standard reasoning

 

How many times have you heard commands like “use standard algorithms” or “use a container”? With luck, we all know the classical reasons for doing that: efficiency, correctness, maintainability. Now there is another fact we could care about: the future of our language.

Let me articulate.

Probably the biggest limitation of the STL is the lack of composability. One of the most mind-changing additions to C++17 will be Eric Niebler’s ranges. Ranges are basically a shift from iterators to a superior abstraction. Ranges enable programmers to write fluent and – hopefully – efficient algorithms.

Coding with ranges produces “pipelined” transformations. For example, to generate an infinite list of integers starting at 1, consider only even values, square them, take the first 10, and sum them:
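A sketch in range-v3 syntax:

auto sum = ranges::accumulate(
    view::ints(1)
    | view::filter([](int i) { return i % 2 == 0; })
    | view::transform([](int i) { return i * i; })
    | view::take(10), 0);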

It’s not my intention to talk about ranges and I am sure you rule them better than me (if not, give them a try and refer to the documentation). My point is: Eric’s proposal includes standard combinators like transform and take, and mutating ones like sort and unique. They are essentially the counterparts of those in the STL. Thinking in terms of standard blocks helps embrace ranges effectively. If you are used to employing standard functions, you are already confident with several range views and algorithms.

When I can, I’ll try to show my solutions by using ranges as well. Clearly ranges are not part of the standard yet, so these solutions are not intended to run on competitive programming sites. I’ll use Clang on Visual Studio 2015 for my attempts.

 

Standard flexibility

 

Many people don’t employ standard algorithms massively. For some of them the reason is that algorithms are often closer to math than to programming – even if programming is math…

Consider the simple task of finding the minimum difference in a sequence of ascending sorted numbers. For example, in this sequence:

[10, 20, 30, 100, 200, 300, 1000]

10 is the minimum absolute difference (e.g. 20-10).

This is an explicit solution:
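A sketch (elems holds the sorted sequence):

int minDiff = numeric_limits<int>::max();
for (size_t i = 0; i + 1 < elems.size(); ++i)
    minDiff = min(minDiff, elems[i + 1] - elems[i]);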

That solves the problem in a single pass.

The same problem written in Python:

minDiff = min([r - l for l, r in zip(elems, elems[1:])])

That is completely different because it employs a functional block that C++ misses: zip. Basically it combines elements pair-wise from N sequences (two in this case).

Look more closely at the C++ solution.

It’s basically an accumulation, right? In each iteration we calculate the difference between two adjacent elements and then we update the global minimum. But it’s not a std::accumulate because that function cannot operate on two sequences at the same time – at least without special iterators.

Any idea?

I am sure you found it. Your standard friend is inner_product:
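A sketch:

auto minDiff = inner_product(
    begin(elems), end(elems) - 1,  // [begin, end-1]
    begin(elems) + 1,              // [begin+1, end]
    numeric_limits<int>::max(),    // the identity of min
    [](int m, int diff) { return min(m, diff); },  // replaces sum
    [](int l, int r) { return abs(r - l); });      // replaces product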

We’ll turn back to inner_product later in this series because it’s powerful, but just to recap: the simplest overload of inner_product returns the result of accumulating a starting value with the inner products of the pairs formed by the elements of two ranges. That is:

init += range1[0]*range2[0] + range1[1]*range2[1] + ... + range1[N-1]*range2[N-1]

There is more: both sum (+) and multiplication (*) can be customized. In my example, I replaced accumulation with min and product with absolute difference. I also set the initial value to the maximum possible int value – in math I would set it to infinity, the identity of the minimum operation (e.g. min(infinity, value) is value). Note also that by passing [begin, end-1] and [begin+1, end] I basically obtained these “pairs”:

range[0], range[1]
range[1], range[2]
...
range[N-2], range[N-1]

inner_product has a name closer to math than to CS, but it is actually a general function that hides rich flexibility. When I say flexibility I am not saying composability. Flexibility means that the same function can be adapted to different scenarios. Standard functions generally have great flexibility.

But flexibility can lead to misuses…

Suppose after we calculate the minimum difference we want to output the pairs of elements with that difference. Recalling the first example:

[10, 20, 30, 100, 200, 300, 1000]

The minimum difference is 10 and the pairs of elements differing by this value are:

10, 20
20, 30

So basically we need to iterate over adjacent elements and print them if their difference is 10:

[20-10, 30-20, 100-30, ...]

Is there any standard algorithm for this purpose?

std::transform is really close, but it does not really perform what we need mainly for two reasons: first of all, it does not guarantee in-order application of the custom function – or in-order iteration – and second, transformation is N to N, that is each element in the source gets mapped into another thing.

Ideally we need something like a zip combined with a copy_if, or a copy_if plus a special iterator type wrapping two adjacent elements at the same time – things that are, for example, in boost. For this reason, I think a for loop is the best choice in “bare” C++:
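A sketch:

for (size_t i = 0; i + 1 < elems.size(); ++i)
    if (elems[i + 1] - elems[i] == minDiff)
        cout << elems[i] << " " << elems[i + 1] << "\n";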

Looking at this code, some C++ hackers may force std::equal to do the same thing, getting some sugar:
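Something like this (the hack discussed just below – don’t take it as a recommendation):

equal(begin(elems), end(elems) - 1, begin(elems) + 1, [=](int l, int r) {
    if (r - l == minDiff)
        cout << l << " " << r << "\n";
    return true; // always "equal", so the whole range gets visited
});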

The sugar is given by not accessing the vector and not writing the for loop explicitly.

The problem is that the snippet is unclear and hackish: std::equal has not been designed to do such things. What a programmer expects is just a pair-wise comparison, possibly using a custom predicate. You know, don’t fall into the common “if all you have is a hammer, everything looks like a nail”. Just say no. The more you practice with standard algorithms, the more they will pop up in case they match. Pretty much automatically.

Flexibility comes with responsibility. Standard algorithms constitute well-known patterns. They help understand the code because other people have clear expectations from them. Hacking standard algorithms can lead to disasters.

If you miss a block, at least two ways are viable: if you don’t need to reuse it, go with a local solution. If reuse is important (and generally it is) then write a function designed to be reused. This requires some care, and keeping in mind how standard C++ algorithms are designed can be a good starting point.

Since “some care” requires time, in competitive programming this is not always feasible. For this reason people tend to go with custom solutions, even for summing the elements of an array. In this series I have more time and there is nothing to prevent me from articulating and showing some solutions based on simple functions that C++ misses. Pragmatic competitive programmers generally keep such functions and snippets on hand. Sometimes the standard already provides such functions, and it’s just a matter of practicing.

Although standard algorithms are really basic, they support a rich flexibility, as I said. This means you can roll your own special iterator and solve the problem in a single pass – imagine also that ostream supports operator<< for pairs:
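A sketch of the idea, adj_it being the custom iterator wrapping two adjacent elements at the same time (discussed just below):

copy_if(adj_it(begin(elems)), adj_it(end(elems) - 1),
        ostream_iterator<pair<int, int>>(cout, "\n"),
        [=](const auto& p) { return p.second - p.first == minDiff; });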

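Here is a sketch of the idea (not the exact snippet – adj_it is a minimal input iterator over adjacent pairs, plus the imaginary operator<< for pairs):

// adj_it: dereferencing yields the pair (current element, next element)
struct adj_it
{
    using iterator_category = input_iterator_tag;
    using value_type = pair<int, int>;
    using difference_type = ptrdiff_t;
    using pointer = const value_type*;
    using reference = value_type;

    vector<int>::const_iterator it;

    value_type operator*() const { return { *it, *(it + 1) }; }
    adj_it& operator++() { ++it; return *this; }
    adj_it operator++(int) { auto tmp = *this; ++(*this); return tmp; }
    bool operator==(const adj_it& other) const { return it == other.it; }
    bool operator!=(const adj_it& other) const { return it != other.it; }
};

// imagine ostream supports operator<< for pairs
ostream& operator<<(ostream& os, const pair<int, int>& p)
{
    return os << p.first << " " << p.second;
}

// single pass over all the adjacent pairs
copy_if(adj_it{ begin(elems) }, adj_it{ end(elems) - 1 },
        ostream_iterator<pair<int, int>>(cout, "\n"),
        [=](const pair<int, int>& p) { return p.second - p.first == minDiff; });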
Is it worth it? There is not a single answer to this question. It’s really contextual: it depends on where you work, on whether your codebase is complicated enough to benefit from these abstractions, etc. Whenever you introduce a new block (or use one from a library) – e.g. adj_it – it has to be learnt by you and – hopefully – by whoever works on your codebase.

Ranges should help a lot in these scenarios because, in addition to rich flexibility, they offer strong composability. Let me turn the “partial C++17 support” on and show you my range-based solution to the initial problem:

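Something along these lines, with range-v3 (a sketch, not necessarily my exact snippet – view::zip_with and view::tail come from the library):

// pair each element with its successor, map each pair to its difference,
// then take the minimum
auto minDiff = ranges::min(
    ranges::view::zip_with(
        [](int a, int b) { return b - a; },
        elems, ranges::view::tail(elems)));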
It seems completely different from the inner_product one, but what really changes is the way of thinking about the algorithm. This solution is probably closer to Python than to C++, isn’t it? That’s because more blocks become available and composition plays an important role. This brings our way of thinking about the algorithm closer to other paradigms. Again, this is an act of responsibility: choosing between – say – a for loop and a range-based solution can be disputable.

Are the inner_product and the ranges-based solutions hard to understand? Definitely they are for programmers who don’t know standard idioms. It’s the same with map.insert, which does not insert at all if the element is already there. Is std::map misleading? I know people who spend more time dumping on standard things than they would spend learning those things properly. Mastering language patterns is the key, and in the next posts I’ll show some patterns involving standard algorithms and containers.

In this series, first I’ll consider pre-C++17 solutions and whenever I can I’ll outline possible C++17 answers.

An invitation about ranges: please post comments with your suggestions/corrections/opinions on my solutions, because I think use cases are important both for Eric and for the whole C++ ecosystem.

What’s next

I don’t have a precise idea for the other posts yet. It’s possible I’ll discuss the usage of a certain standard algorithm in different scenarios, or how to deal with some kinds of problems, or how to use a container to solve certain challenges, etc.

Definitely, since this series is about competitive programming, I’ll spend some posts on peculiar aspects of challenges, like paying attention to the constraints of the problem or choosing the proper data type for the solution.

The next post will be kind of preliminary to the others. It’s basically about input and output.

Final note about code snippets

You probably noticed I used gist to render code snippets in this post. For the first time I needed to completely bypass WP: yesterday this post was broken because of snippets, for no apparent reason. I tried to fix them, but after one hour of attempts I gave up. WP is the enemy of programming bloggers. I know, I should move to Jekyll or something better, but WP still has some advantages. For this reason I’m experimenting with this “embed a Gist” feature. The procedure is a bit longer (create the gist, paste the code, give it a sensible name, save, copy the URL into the post), but I think I would spend the same time fixing WP troubles, so it’s ok for now.

A couple of weeks ago I found a simple bug in the dusty corners of a production repository I usually work on. A certain feature was rarely used and it seemed to be covered by a test…Actually it was not, and some years ago a guy, feeling protected by that test, changed the code and broke the feature, but nobody complained. Recently a user tried to resume this old feature and it didn’t work as expected.

I don’t want to bother you with all the details of the story, just a bit of context: there was an old piece of code reading a small file through C FILE handles. Some years ago that piece of code was migrated to C++ streams (since the files to read were really small and simple) and a silly bug was introduced. Since the bug was really simple, I wondered whether it was caused by inattention or by ignorance; then I had a chat with the programmer who committed the change and discovered it was the latter. The fix was really easy.

Some days later I discussed this problem with some friends and realized they were unaware of it too. So I decided to write a short post about this story; maybe it is useful to other coders.

Imagine you are using streams to read some data from the standard input. This is what the input looks like:

number
some words in a line
number
some words in a line
...

And then imagine the following code reading that input:

int num; string line;
while ( (cin >> num) && getline(cin, line) )
; // something

Did you spot any problems?

If not, don’t worry, it’s a bit subtle.

Consider the invisible characters contained in the input stream:

10'\n'
some words'\n'
...'\n'

Actually this is not formally true on Windows, but in general you have an LF char at the end of each line. Let’s follow the code flow:

  • cin >> num correctly reads the int, stopping at (in the language of streams: “detecting but not consuming”) ‘\n’
  • getline(cin, line) now reads the next line until it encounters a line separator (‘\n’ by default). But ‘\n’ is still in the stream buffer and then getline returns immediately, storing nothing in line.
  • Again cin >> num is evaluated but this time it fails, because the stream is not fed with an int. failbit is set then. The loop terminates.
  • The user complains because the feature does not work as he expects. Ok, sorry this is not part of the code flow…

We just experienced a difference between operator>> and getline: the former skips any leading whitespace (actually any separator – according to the locale in use) before performing the read operation; the latter does not.

Basically, it has to do with the difference between formatted and unformatted input functions. Stream operators (like operator>> for strings) belong to the former category, getline to the latter. In short – among other differences – formatted input functions skip leading separators (e.g. whitespaces, LF) by default, unformatted functions do not.

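A tiny sketch to visualize the difference (an istringstream behaves like cin here):

istringstream in("   42\nhello");
int num;
string line;
in >> num;          // formatted: skips the leading whitespaces, reads 42, stops at '\n'
getline(in, line);  // unformatted: '\n' is still there, so line is empty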
The long story is: both formatted and unformatted functions create basic_istream<CharT>::sentry objects to prepare the input stream for I/O (e.g. checking the validity of the stream). One of the operations performed in the sentry constructor is skipping leading whitespace. To decide whether to skip it or not, it uses two pieces of information:

  • a bool parameter passed to the constructor, which is false by default. Don’t get confused: it’s false when you want the sentry object to skip whitespace (in fact, it’s generally called noskipws – e.g. _Noskip on Visual Studio);
  • ios_base::skipws flag (set or not on the stream object).

If _Noskip is false and ios_base::skipws is true, then leading whitespace is skipped.

I am sure you can already imagine the rest of the story: when a formatted function creates a sentry, the parameter is left to its default value (false), and since cin‘s ios_base::skipws is true, operations like cin >> i work as expected even if some whitespace stands in front of the int value. Conversely, unformatted functions create sentry instances by explicitly passing true to the constructor. For this reason the lonely leading ‘\n’ is not discarded.

[note]

Beware: something about formatted/unformatted functions changed between C++98 and C++11, in particular istream& operator>>(streambuf*). In C++98 it was a formatted operation; now it is unformatted.

[/note]

Why does getline preserve leading separators? I think because it’s a kind of raw read operation. Note that if the delimiter is found, it is extracted and discarded (i.e. it is not stored, and the next input operation will begin after it). This is important, because it enables code like this to work as expected:

stringstream ss("the first line\nthe second line");
string line;
while (getline(ss, line))
{
    // ... line does not contain '\n' at the end
}

How did we fix this issue?

The simplest thing you can do is just:

while ( (cin >> num >> std::ws) && getline(cin, line) )
;

The left hand side reads the int and then skips leading separators from the stream. std::ws is an input manipulator created for this purpose.

A bunch of other solutions are possible. For example the one using ignore:

while ( (cin >> num).ignore(numeric_limits<streamsize>::max(), '\n') && std::getline(cin, line))

Here we discard as many characters as possible, that is, until either count characters (the first argument) are discarded, the delimiter (the second argument) is found, or the end of the stream is reached.

Not only is the former solution simple and effective enough, but it also prevents oversights like:

10'\n'
'\n'
some words

Above, the user left an empty line after the first number. The std::ws solution does not break; the ignore one does. On the other hand, the std::ws solution does not preserve leading whitespace, if you need it. But that was not our case (you can imagine the final code looked a bit more defensive than the one I showed here, but the main observations hold).

One can also develop a proxy object to allow code like this:

cin >> num >> std::ws >> as_line >> line;

as_line may also embody the std::ws part:

cin >> num >> as_line >> line;

It’s not hard to code such machinery. For example:

struct lines_proxy
{
	istream& operator()(string& s)
	{
		return getline(is >> std::ws, s);
	}

	istream& is;
};

struct line_t {} as_line;

lines_proxy operator>>(istream& is, line_t)
{
	return{ is };
}

istream& operator>>(lines_proxy p, string& s)
{
	return p(s);
}

...

while (cin >> num >> as_line >> line)

Just for fun.

It’s over for today. The leading actor of our story was a really silly problem but the lesson learned was interesting: even if streams were designed as a “formatted” abstraction on top of I/O buffers, unformatted operations are still there. Mixing formatted and unformatted operations should be done with care.

This post is just a reminder to myself because I fell for it, again…Well, let me step back and explain the scenario.

Suppose we are using this generic approach for memoization:

template <typename R, typename... Args>
auto memoize(R(*fn)(Args...))
{
    std::map<std::tuple<Args...>, R> table;
    return [fn, table](Args... args) mutable -> R {
        auto argt = std::make_tuple(args...);
        auto memoized = table.find(argt);
        if(memoized == table.end())
        {
            auto result = fn(args...);
            table[argt] = result;
            return result;
        }
        else
        {
            return memoized->second;
        }
    };
}

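For example (fib here is just a hypothetical function used for illustration):

int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

auto memoFib = memoize(fib);
memoFib(30); // computed and stored
memoFib(30); // this time the result comes from the table
// note: the recursive calls inside fib are not memoized, only the top-level ones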
Suppose also – at some point – two new requirements pop up:

  • ability to remove/update some entries of the table – because something changes somewhere else and those results become old,
  • ability to switch to other functions (the table is not needed there).

We decide to store the table and the lambda into a struct. For the sake of simplicity, this is a specialized version of such a class using a toy function simulate:

struct Memoization
{
    Memoization()
    {
        calc = [this](int i)
        {
            auto memoized = table.find(i);
            if(memoized == table.end()) {
                auto result = simulate(i); // <- somewhere
                table[i] = result;
                return result;
            } else {
                return memoized->second;
            }
        };
    }

    function<int(int)> calc;
    std::map<int, int> table;
};

Even if this design is probably silly, we can easily satisfy both requirements (e.g. calc can be set to something else, and table is fully accessible).

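For example, something like this sketch:

Memoization memo;
memo.calc(10);        // computed via simulate and stored
memo.table.erase(10); // first requirement: drop an entry that became old
memo.calc = [](int i) { return i * 2; }; // second requirement: switch to another function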
This code is fine for rapid prototyping, so we don’t refactor it yet but instead we experiment a bit more. For example, we create a factory for creating different Memoization instances depending on some configuration. Each configuration merely results in a new core function to use (e.g. remember the second requirement). Since it’s prototyping, we leave the constructor as is and we instead set the function by hand:

Memoization CreateMemo(const Configuration& config)
{
   Memoization memo;
   // using config to create memo
   // e.g. memo.calc = ...
   return memo;
}

We try this code and we note that it behaves differently on two compilers: on Visual Studio 2013 we get a crash at some point while calling calc(); on clang, instead, it seems to work smoothly. We start debugging and we immediately spot what’s going on…Do you see that we are one step away from a dangling reference problem? Actually, this accident happens on both clang and Visual Studio, but some (un)lucky condition makes it work on the former.

The culprit – in this case – is RVO, but the real issue is…both capturing this into calc – which is a member variable – and copying/moving *this.

By capturing this, we have coupled calc to this->table, a binding that won’t change anymore – say, when you move (or copy) the object. this gets no special treatment inside the callable object created by the lambda expression. It’s just a Memoization pointer, and when the lambda gets moved (or copied), so does the pointer. Shallowly.

In our factory function, RVO probably works differently between clang and VS: VS is not using RVO here, while clang is, and the problem is apparently concealed. By disabling RVO on clang (e.g. -fno-elide-constructors), we get the same problem found on VS.

Did you get the point? With the original “memoized” version of calc (which captures this), after the move (or the copy) the returned Memoization instance holds a reference to the local memo’s table, not to its own. Finally, the local temporary instance is destroyed and we are left with a dangling reference. This problem is subtle, since the code can still “work” under certain conditions – e.g. RVO. It should be clear that copying instead of moving has the same effect.

Maintaining the original design, the problem can be quickly solved, for example by constructing the object directly:

class Memoization
{
public:
   Memoization()
   {
    // same as before
   }

   Memoization(function<int(int)> calcFn)
      : calc(calcFn)
   {
   // ...
   }
// ...
};

Memoization CreateMemo(const Configuration& config)
{
   if (config. ...)
      return {...} // won't copy nor move
   //...
}

But the main issue holds and this could lead to disaster:

Memoization m1;
auto m2 = m1; // [this] in m2.calc points to m1

Not only is it dangerous, it’s also wrong: you probably expect each Memoization instance to have its own copy of the table – for this reason a solution with – say – shared ownership of the table doesn’t fit well.

At the end of the story we changed our design and came up with a better solution. But that is not the main point of the article. Even if my example is probably goofy, this experience left a valuable lesson: capturing this into a member variable lambda is valid C++ code and may cause headaches if we copy/move *this. Sometimes I think we have a duty to set some limits, to prevent traps other people could fall into. For this reason, I came to a couple of observations:

(Moving instead of copying does not change the meaning here, so let me just say “copying” and avoid writing copy/move every time.)

First, we usually don’t need to capture this into a member variable lambda at all. Exploring better ways is always advisable.

Some of you may still complain that C++ is missing an opportunity by letting this behavior go undisturbed. You might expect the compiler to magically update the this pointer to the copied instance, wouldn’t you? Honestly, I have no strong opinion on that; I’m just thinking aloud.

Second: we can judge pragmatically. Capturing this into a member variable lambda and copying *this just do not get along. Giving up one of the two is realistically better than adding some special treatment.

For this reason, I see a terse idiom: either capture this into a member variable lambda or copy *this (or neither).

This is eventually another subtle case the Rule of Zero does not cover, and – in case you cannot live without capturing this – I think deleting copy and move operations in the host class is a desirable design decision to apply (and to document?) – it being understood that maybe you don’t need to capture this into a member variable lambda at all.

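A sketch of that decision on the toy class above:

struct Memoization
{
    Memoization() { /* calc captures this, as before */ }

    // capturing this and copying/moving *this just do not get along:
    Memoization(const Memoization&) = delete;
    Memoization& operator=(const Memoization&) = delete;
    Memoization(Memoization&&) = delete;
    Memoization& operator=(Memoization&&) = delete;

    function<int(int)> calc;
    std::map<int, int> table;
};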
Last month Marco Foco and I facilitated a workshop on refactoring legacy C++ code. It was an improved version of the same workshop we presented at the Italian Agile Day in November, with Gianluca Padovani as well.

To give you a bit of context: some days before, the attendees cloned a certain Git repository we indicated and compiled the code (using CMake to generate the projects for their environment) on their machines. The workshop was divided into 4 parts, each one focusing on a C++ theme: productivity, memory management, algorithms and generic programming. In each part, we first spent 10 minutes explaining a few C++11/14 concepts and then gave 25 minutes to work on some refactoring exercises. At the end of each part, a brief retrospective.

In this post I’m going to describe three pitfalls attendees fell into, related to initializer_list – at least at first sight.

The workshop code was a simple version of Yahtzee, the famous dice game. It was test-driven and, among others, we wrote a test suite covering the score calculation. For instance, suppose a player rolls the dice getting:

2 2 3 3 3

She gets a full house, worth 25 points. A class is responsible for recognizing and scoring this kind of combination. To roll the dice, another class is involved, a sort of “IDiceRoller”. It is an interface that provides only one function:


virtual void Roll(int(&dice)[5]) = 0;

In the test suite, we implemented a simple fake object (not a mock – that could be an exercise) to manipulate and control the dice. Imagine:


class FakeDiceRoller : public IDiceRoller
{
public:
  FakeDiceRoller() { fill(begin(_dice), end(_dice), 1); }

  void AssignDiceValues(int values[5]) { copy(values, values + 5, _dice); }

  void Roll(int(&dice)[5]) override { copy(begin(_dice), end(_dice), dice); }
private:
  int _dice[5];
};

// in a test fixture

// FakeDiceRoller _roller is a member variable

int dice[5] = {1,1,2,3,4};
 _roller.AssignDiceValues(dice);

FakeDiceRoller intentionally had a poor interface and design. The point was: how could you improve it? Suppose you couldn’t change the domain interface IDiceRoller, as – likely – in real life. The first series of exercises was about productivity, and AssignDiceValues was certainly one of the points we wanted participants to think about. At some point they gave it a try:

_roller.AssignDiceValues({1,2,3,4,5});

They compiled and…they failed.

“Cannot convert initializer list argument to ‘int*'”.

People started trying to figure out why initializer_list was not convertible to int[]…

“This has nothing to do with initializer_list,” I stated. “This is the language, and it’s saying the function parameter int values[5] is just int* values: you cannot initialize int* from {1,2,3,4,5}”.

Then a gentleman took the floor and shouted “use the same signature as Roll, accepting a const reference to array instead of a non-const reference”. That was:

void AssignDiceValues(const int(&values)[5])
// usage
_roller.AssignDiceValues({1,2,3,4,5});

It was fine.

But we were more subtle. Now the attendees started merging two lines in one, getting code like the following:

_roller.AssignDiceValues({1,1,1,1});

Do you spot any problem here?

Now FakeDiceRoller‘s _dice is:

[1, 1, 1, 1, 0]

This is because the language zero initializes missing ints.

It happened that the test at issue checked a poker of ones. And ok, we had four ones. It happened also that the test was a bit wrong, because it didn’t check the 5th value of the dice (say it was a 3: the test had to check we scored a poker AND a 3). Whoever wrote the test made a mistake. C’est la vie.

Can our test environment prevent this kind of imprudence? This simple requirement can be translated into a C++ exercise: we want to inline-initialize an array of exactly N elements. In short, this should fail under our conditions:

void AssignDiceValues(const int(&values)[5]);
_roller.AssignDiceValues({1,2,3,4}); // missing last die

How can this be done? I’ll give you three attempts; other solutions are possible, but here I want the simplest approaches. The good news is that many people at the workshop suggested the last one, which I think is the best.

Attempt #1: initializer_list

Since an initializer_list‘s contents are carved into the code – at compile time – its .size() function should be constexpr, shouldn’t it? Yes, but only from C++14:

void AssignDiceValue(initializer_list<int> values)
{
   static_assert(values.size() == 5, "Please provide exactly 5 values"); // only from C++14
   copy(values.begin(), values.end(), _dice);
}

Actually this doesn’t work yet, as a reader commented: even though size() is constexpr in C++14, values is a function parameter, so values.size() is not a constant expression there.

Attempt #2: strict_array

template<typename T, size_t N>
struct strict_array : array<T, N>
{
   template<typename... V>
   strict_array(V... vals) // no &&/forward to simplify
      : array<T, N>( {vals...} )
   {
      static_assert(sizeof...(vals) == N, "Please provide exactly 5 values");
   }
};

void AssignDiceValues(const strict_array<int, 5>& values);
_roller.AssignDiceValues({1,2,3,4,5}); // ok
_roller.AssignDiceValues({1,2,3,4}); // static_assert fires
_roller.AssignDiceValues({1,2,3,4,5,6}); // static_assert fires and array<int, 5> constructor complains

Attempt #3: just the language

We don’t want to add complexity to the framework. We don’t need static_assert nor new bizarre array types. We can use the language and express the requirement by design. We just need to add a tiny level of abstraction:

class Die
{
   int value;
public:
   Die(int val) : value(val) {} // mandatory
   // operator int() { return value; } // if really needed
};

void AssignDiceValues(const array<Die, 5>& values);
_roller.AssignDiceValues({1,2,3,4,5}); // ok
_roller.AssignDiceValues({1,2,3,4}); // ko
_roller.AssignDiceValues({1,2,3,4,5,6}); // ko

This is the solution I like the most. It’s a design decision.

Two main points about initializer_list to remember when you refactor legacy “initialization” code:

  • int arr[] is int*. Don’t expect the language to magically deduce an initializer_list
  • initializer_list‘s size() is constexpr only from C++14.

Next. Another task involving initialization regarded the game configuration: a game could be configured with a few options. Since the codebase was a hybrid of old and modern C++, a tuple was employed:

YahtzeeGame game ( make_tuple(5, 6, 2) ); // 5 dice, [1..6] values, 2 players

A gentleman spotted the following in the dark corners of the codebase:

vector<YahtzeeGame> games;
games.push_back(make_tuple(5, 6, 2));
games.push_back(make_tuple(5, 6, 3));
games.push_back(make_tuple(5, 6, 4));
// other stuff

Excited about C++11, he tried to refactor:

vector<YahtzeeGame> games = { {5, 6, 2}, {5, 6, 3}, {5, 6, 4} };

And does it compile?

Yes!

No.

I’m kidding you!

I rephrase: do the following statements compile?


YahtzeeGame game { 5, 6, 2 }; or YahtzeeGame game { {5, 6, 2} };

No, neither of them does. Some participants asked “why is initializer_list not supported here?”.

“initializer_list is not guilty”, I replied. First: how can one expect an initializer_list to be used to construct a tuple? initializer_list is – by definition – homogeneous, just the opposite of tuple. tuple would need a constructor taking…initializer_list<?>. Some people started likening tuple to pair: “I can do it with pair”.

Yes, you can do it with pair, but the real reason lies in tuple’s constructor, which is explicit – and, as you know, copy initialization considers only non-explicit constructors. That is, you can do:


tuple<int, string, foo> t {10, "hello", {fooArg1, fooArg2}};

But not:


tuple<int, string, foo> t = {10, "hello", {fooArg1, fooArg2}};

Nor:


tuple<int, string, foo> make_my_tuple() {

   return {10, "hello", {fooArg1, fooArg2}};

}

So you may refactor the initial code by adding make_tuple:

vector<YahtzeeGame> games = { make_tuple(5, 6, 2), make_tuple(5, 6, 3), make_tuple(5, 6, 4) };

Here too, initializer_list was in the clear. When something is wrong with initialization – since curly-brace initialization (aka uniform initialization) and initializer_list share the same syntax, and since almost all the standard containers support initializer_list construction – someone may jab at this type. As a reader commented, N4387 proposes (among other things) getting rid of this limitation.

The third and last example is another story.

To calculate the scores, a class with a CalculateScores function was provided. This function was monolithic, imagine a big if cascade:

if (...single dice...) {
...
}
if (...pair dice...) {
...
}
if (...tris dice...) {
...
}
if (...poker...) {
...
}
etc.

We proposed to decouple this function and make it modular. This way one can create several versions of the game, for example one with no special points (e.g. no full, poker, straight), another with extra points, etc. People designed a simple IRule interface, providing a function:

virtual void Apply(const GameState& state, ScoreTable& scores) = 0;

ScoreTable was already in the code and it just stored the results of the calculation. The idea was to apply the rules in a chain. Straightforward.

A funny anecdote: at some point I asked “how can you improve this if cascade?”. One person replied “we can use a switch-case”. I responded: “yes but…it’s pretty much the same. What can we do from a design point of view to make this code more modular?” Another guy said “we can design an interface and several concrete classes”. And suddenly the person who proposed the switch-case got up and left the room! Ouch…is an interface so bad?!

No more chatting! People coded this interface, created the rules and…they had to store them somewhere. They opted for a vector of unique_ptrs:

vector<unique_ptr<IRule>> rules;

And they serenely wrote:

vector<unique_ptr<IRule>> rules = { make_unique<Single>(), make_unique<Double>(), make_unique<Tris>(), make_unique<Full>(), ... };

And they got impatient to test their code, having unit tests on their side – contrary to what they have at work 🙂

I felt a tremor in the force…

“Noooo. Another compiler error”😦

Said desperate programmers whining from the trenches.

“call to deleted constructor of ‘std::unique_ptr<Single, std::default_delete<Single> >”

“What the fuck?” Some of them kindly complained!

This time, they really found initializer_list guilty. And this time they were right: initializer_list doesn’t support move-only types. The main reason is that its begin() and end() return const pointers. There is a proposal to address this issue, and several smart guys advanced their own idioms – for example here.

As before, I wanted a simple solution for my modern C++ novices, to let them play and experiment with C++11/14. I seized the moment: “guys, let’s do a simple exercise with variadics”:

auto rules = CreateVector<unique_ptr<IRule>>( make_unique<Single>(), make_unique<Double>(), ... );

The idea was very simple and so was the implementation:

template<typename T, typename H>
void CreateVectorImpl(vector<T>& v, H&& single)
{
   v.emplace_back(forward<H>(single));
}

template<typename T, typename H, typename... Tail>
void CreateVectorImpl(vector<T>& v, H&& head, Tail&&... tail)
{
   v.emplace_back(forward<H>(head));
   CreateVectorImpl(v, forward<Tail>(tail)...);
}

template<typename T, typename... Tail>
vector<T> CreateVector(Tail&&... tail)
{
   vector<T> v;
   CreateVectorImpl(v, forward<Tail>(tail)...);
   return v;
}

Alessandro Vergani (who was there to help us) sent me this (specific – but slick) solution:

template <typename Type>
void setup_rules(vector<unique_ptr<IRule>>& v)
{
   v.emplace_back(make_unique<Type>());
}

template <typename Type, typename Type2, typename... OtherTypes>
void setup_rules(vector<unique_ptr<IRule>>& v)
{
   v.emplace_back(make_unique<Type>());
   setup_rules<Type2, OtherTypes...>(v);
}

template<typename... Types>
vector<unique_ptr<IRule>> CreateRules()
{
   vector<unique_ptr<IRule>> rules;
   setup_rules<Types...>(rules);
   return rules;
}

// usage: auto rules = CreateRules<Single, Double, Poker>();

Wrapping up the story, I believe people at the workshop put too much blame on initializer_list. They got errors on something related to initialization with curly braces, and they accused initializer_list. In the first case, the main misunderstanding was related to the language itself: int[] is just int*, in C++11 as in C++98. Rectifying was quite simple, by using a const reference to an array. initializer_list has nothing to do with that, not even here: it’s the language. And just by using the language we addressed the other requirement about prohibiting “uninitialized” dice. Here, some people thought they could just static_assert initializer_list’s size. I deem it’s not worth it.

At first sight, the second case is more related to initializer_list, because every container is constructible from an initializer_list. Why does tuple differ? If people don’t think about the mathematical difference between initializer_list (homogeneous) and tuple (heterogeneous), they can fall into a trap. Pair is the same story, but the curly braces there come from uniform initialization; and since pair’s constructor is not explicit, copy initialization is possible and the trap is just veiled.

In the last example, initializer_list tried to escape through the window, but this time it couldn’t. Imagine initializer_lists as arrays, globally stored somewhere: even if they are (maybe) used only once (in the line where you perform the initialization), the compiler is solely responsible for their state. We know there are many workarounds, but I’d really like an official feature in the standard to address this issue (e.g. N4166).