A common programming puzzle consists in printing all rotations of a string. For example, all rotations of abc are:

abc
bca
cab

Intuitively, every rotation is a substring of the original string concatenated with itself (last character excluded):

abc + ab 
= abcab
  abc
   bca
    cab

Another example:

input: abcdefgh

abcdefghabcdefg
abcdefgh
 bcdefgha
  cdefghab
   defghabc
    efghabcd
     fghabcde
      ghabcdef
       habcdefg

Indeed, the number of rotations is equal to the length of the string.

The pattern above could be mimicked with ranges quite easily.

We first create a temporary string as the concatenation of the original string with itself excluding the last character:

std::string input = "abcd";
const auto len = size(input);
auto tmp = input + input.substr(0, len - 1); // abcdabc

And then we print all the sliding windows of size len.

That’s a views::sliding!

std::string input = "abcd";
const auto len = size(input);
auto tmp = input + input.substr(0, len - 1);
std::cout << (tmp | views::sliding(len));
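For reference, here is the same snippet wrapped into a minimal complete program (a sketch of mine, assuming range-v3; the printing loop is just one way to show one rotation per line):

#include <iostream>
#include <string>
#include <range/v3/all.hpp>

int main()
{
    using namespace ranges;
    std::string input = "abcd";
    const auto len = size(input);
    auto tmp = input + input.substr(0, len - 1); // "abcdabc"
    for (auto window : tmp | views::sliding(len))
    {
        for (char c : window)  // each window is a view over len consecutive characters
            std::cout << c;
        std::cout << '\n';     // abcd, bcda, cdab, dabc
    }
}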

An alternative solution to avoid the temporary string consists in generating every rotation lazily.

Thinking range-fully, the string concatenated with itself can be seen as an application of views::cycle. We can still use views::sliding but we must take only len windows.

Let me drive you towards the solution.

First of all, we create the “infinite” string:

std::string input = "abcd";
const auto len = size(input);
input | views::cycle; // ['a', 'b', 'c', 'd', 'a', 'b', 'c', ... 

As before, now we apply sliding:

std::string input = "abcd";
const auto len = size(input);
input | views::cycle // ['a', 'b', 'c', 'd', 'a', 'b', 'c', ... ]
      | views::sliding(len); // [ ['a', 'b', 'c', 'd'], ['b', 'c', 'd', 'a'], ['c', 'd', 'a', 'b'] ... 

Finally, we take only (the first) len windows:

std::string input = "abcd";
const auto len = size(input);
auto rotations =
  input | views::cycle // ['a', 'b', 'c', 'd', 'a', 'b', 'c', ... ]
        | views::sliding(len) // [ ['a', 'b', 'c', 'd'], ['b', 'c', 'd', 'a'], ['c', 'd', 'a', 'b'] ... 
        | views::take(len); // [ ['a', 'b', 'c', 'd'], ['b', 'c', 'd', 'a'], ['c', 'd', 'a', 'b'] , ['d', 'a', 'b', 'c'] ]

std::cout << rotations;

To conclude, let me make a quick digression on views::cycle.

views::cycle repeats the elements of a range cyclically:

views::cycle(views::single(1)); // [1, 1, 1, ... 

std::vector v = {1,2,3,4};
views::cycle(v); // [1, 2, 3, 4, 1, 2, 3, 4, 1, ... 

views::cycle makes an endless range from an input range, but it works on the elements: it repeats them one by one, not the range as a whole.

A common question is how to repeat the entire range as-is, not the content. For example:

std::vector v = {1,2,3,4};
views::cycle( ??? ); // [ [1,2,3,4], [1,2,3,4], [1,2,3,4], ... 

Think about it for a moment.

To solve the puzzle, just remember that cycle repeats the elements of the input range. The trick now is to create a range containing the vector as the only element.

That’s a views::single!

std::vector v = {1,2,3,4};
views::cycle( views::single(v) );

However, this might be “inconvenient” because cycle repeats the std::vector as-is but sometimes we want the content of the vector instead:

std::vector v = {1,2,3,4};
std::cout << views::cycle( views::single(v) ); // no way

Here is a visualization of cycle:

views::cycle(views::single(1)); // [1, 1, 1, ... 

std::vector v = {1,2,3,4};
views::cycle(v); // [1, 2, 3, 4, 1, 2, 3, 4, 1, ... 

views::cycle(views::single(v)); // [v, v, v, v, ... 

However, we might want to generate the following, instead:

std::vector v = {1,2,3,4};
views::cycle( views::single( ??? ) ); // [ [1,2,3,4], [1,2,3,4], [1,2,3,4] ... 

Now the question is: how to turn such a vector into the range of its elements?!

That’s a views::all!

std::vector v = {1,2,3,4};
std::cout << views::all(v); // [1, 2, 3, 4]

views::all simply turns every range into the range of its elements. Basically, applying all to a container “erases” the type of the container. So, the answer to the puzzle is:

std::vector v = {1,2,3,4};
views::cycle( views::single( views::all(v) ) ); // [ [1,2,3,4], [1,2,3,4], [1,2,3,4] ... 

Indeed, views::all is often used for converting containers to ranges.

Edit: in a private conversation, Ruzena Gürkaynak sent an alternative (and more concise) way to get the same result as above:

std::vector v = {1,2,3,4};
views::repeat(views::all(v)); // [ [1,2,3,4], [1,2,3,4], [1,2,3,4] ... 

Just a closing anecdote: when I started writing this blog post, my solution to the problem discussed was a bit different and more convoluted. While putting the idea into writing, I realized the puzzle could be solved more easily. If you are interested, here was my initial thought:

std::string input = "abcd";
const auto len = size(input);
auto cycled  = input | views::cycle;
auto rotations = views::iota(0u, len) | views::transform([=](auto i) {
   return cycled | views::drop(i) | views::take(len);
});
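If you are curious, printing these lazily-generated rotations takes one more loop (my own usage sketch; each r below is itself a view, and range-v3 knows how to stream views):

for (auto r : rotations)
    std::cout << r << '\n'; // [a,b,c,d] then [b,c,d,a] and so on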

That’s all folks!

I hope you have enjoyed the article and please let me know your thoughts in the comments section below.

In this post I share the story of a “C++ aha-moment”, hence I use the hashtag #thatsarotate (resumed last Saturday at the Italian C++ Conference 2021 to honor our keynote speaker Sean Parent). Why not share your own stories of “C++ aha-moments” by adding the same hashtag to your tweets, posts, or whatever? It would be really appreciated.

A classical coding puzzle states: “reverse the order of the words of a string”.

For example:

reverse words in this string

Should become:

string this in words reverse

This problem could be solved either in-place or using additional storage.

Think about it for a while, if you like.

A common in-place solution consists in reversing the string as-is and then reversing the individual words:

reverse words in this string

=> 

gnirts siht ni sdrow esrever

=>

string this in words reverse

Alternative solutions exist (also variants of the same puzzle with some additional requirements on whitespaces), yet my focus here is not discussing them. Rather, I would like to show you a possible interpretation of the idea with C++ ranges. Well, not really C++20 ranges but range-v3 instead (hoping that most of the missing pieces will be merged into C++23).

A simple C++23 solution

When I applied ranges on this problem for the first time, I tried the following solution but range-v3 misled me, so I abandoned the idea. A reader commented on Reddit that this one works:

auto output = input | reverse | split(' ') | transform(reverse);

The code above actually has a caveat: it “consumes” all the delimiters. On the other hand, the solution(s) I am going to show you work for all kinds of spacing and delimiters.

Actually, as another reader commented on Reddit, the code above exploits a library feature of C++23 and it’s not currently supported by most of the available compilers.

I would like to thank the people who commented on Reddit and dug into such details. I have learned something totally new.

Why not just combine split and reverse?

We might intuitively think of jotting down a solution like this:

std::string input = "reverse words in this string";
reverse(input); // in-place reverse
auto reversed = input | views::split(' ') | views::reverse;

Unfortunately, that would not compile: split and reverse are not composable and that’s a known design decision. Basically, split produces at most a forward range, but reverse requires at least a bidirectional range. Read more details in this nicely tailored article by Walletfox.

Apropos, did you know we have a 25% discount until the end of July on Walletfox’s awesome manual about ranges? Just drop this coupon code on the payment form: ITALIANCPP25. I have personally practiced ranges with this book and it was definitely worth it.

When I tried working around the split/reverse problem to solve this puzzle, I got stuck for a while:

std::string input = "reverse words in this string";
reverse(input); // gnirts siht ni sdrow esrever
// now what?

The problem remained unsolved for some time… Suddenly, when I was doing something totally unrelated, I got an insight.

“That’s a rotate!”

Well, it’s not really a std::rotate, but just a C++ aha-moment!

Let me drive you towards my intuition.

What’s in a word, after all?

Let’s get back to this code:

std::string input = "reverse words in this string";
reverse(input); // gnirts siht ni sdrow esrever

The continuation of the solution is based on the following intuition: “words” are made of characters of the same “kind”. That is:

  • alphabetical characters only
  • non-alphabetical characters only (e.g. whitespaces)

Applying this idea, the example string could be divided into these chunks:

["gnirts", " ", "siht", " ", "ni", " ", "sdrow", " ", "esrever"]

Reversing the individual sub-ranges here would be straightforward (and reversing sub-ranges containing whitespaces doesn’t really matter).

To create such a range, we could think of what this operation is really doing. It iterates over the elements one by one, and it puts together all those sharing the same “alphabeticalness”. Anytime it finds one of a different kind, it starts a new “chunk”. In contrast to split, this one does not really “eat” characters. Every chunk might be reversed afterwards since, intuitively, it just references a portion of the data.

For example:

gnirts siht ni sdrow esrever
^
 ^
  ^
   ^
    ^
     ^
      ^ <-- the first chunk ends here!

Checking if any two characters have the same “alphabeticalness” is quite straightforward:

[](auto c1, auto c2) {
    return isalpha(c1) == isalpha(c2);
};

Note that you might replace isalpha with isalnum to add numbers to the party.

As a reader commented, to be more pedantic, we should explicitly use unsigned char as arguments:

[](unsigned char c1, unsigned char c2) {
    return isalpha(c1) == isalpha(c2);
};

However, since the string for this problem contains letters and whitespace only, the representation is always safe.

In addition, we have a comment by Alessandro “Loghorn” Vergani on Twitter:

isalpha returns an int: 0 if the character is not alphabetical, any other number if it is. So, your comparison could fail even if two consecutive chars are both alphabeticals.

So we might change the predicate this way:

[](auto c1, auto c2) {
    return isalpha(c1) && isalpha(c2);
}

Since all the compilers I tried work with the first version of the predicate, I will leave it for the rest of the article. Bear in mind the clever comment by Alessandro, though.

That function returns true only if c1 and c2 are both alphabetical or non-alphabetical. Mixing alphabetical with non-alphabetical returns false, and this is expected.

Now what? Do we have any views that can help us group the range of characters by that predicate?

Well, this is a sort of self-answering question 🙂

That’s a group_by!

std::string input = "reverse words in this string";
reverse(input); // gnirts siht ni sdrow esrever
auto output = input | views::group_by([](auto c1, auto c2) {
    return isalpha(c1) == isalpha(c2);
}) | ...

group_by is one of those patterns that are relatively new for C++. I mean that group_by does not imitate any of the existing STL algorithms, in contrast to others like views::partial_sum, which expresses the same intent as std::partial_sum.

To be precise, the resulting range will look like this:

[ ['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'] ... ]

group_by is a very powerful tool: it takes a binary predicate, (T, T) -> bool, and invokes this predicate on consecutive elements, starting a new group when that predicate returns false. Actually, group_by takes a shortcut: the left hand side of the check is always the first element of the consecutive range of elements. Here, it starts from the first element (g) and finds the first character for which the predicate does not evaluate to true:

gnirts siht ni sdrow esrever
^^
^ ^
^  ^
^   ^
^    ^
^     ^ <- found!

When the predicate checks g and ' ', it returns false and then group_by yields a new group (sub-range). Then it finds the predicate is false again on ' ' and s, leading to a sub-range consisting only of a single ' '. Next, it finds 'siht', and so on.
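As a quick aside, the same semantics is maybe easier to see on numbers (a sketch of mine, assuming range-v3): grouping by equality chunks consecutive equal elements together.

std::vector<int> v = {1, 1, 2, 2, 2, 3, 1};
std::cout << (v | views::group_by(std::equal_to<>{})); // [[1,1],[2,2,2],[3],[1]]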

Moreover, we could catch sight of group_by if we implemented the solution of the puzzle in a more classical C++ way:

std::reverse(begin(s), end(s));
auto head = begin(s);
while (head != end(s))
{
	auto chunkIt = std::find_if_not(head, end(s), [=](auto i) {
		return isalpha(*head) == isalpha(i);
	});
	std::reverse(head, chunkIt);
	head = chunkIt;
}

Indeed, group_by (just the idea, not the view) could be implemented as follows (roughly tested):

auto head = begin(rng);
while (head != end(rng))
{
	auto chunkIt = std::find_if_not(head, end(rng), [=](auto i) {
		return pred(*head, i);
	});
	//... the sub-range is [head, chunkIt) ...
	head = chunkIt;
}

After all, the common pattern emerging from the two snippets above could be considered a form of the two-pointer technique.

Then, we need to reverse each group. Since we need to apply an operation to each element of the range, we could call views::for_each to the rescue:

std::string input = "reverse words in this string";
reverse(input); // gnirts siht ni sdrow esrever
auto output = 
 input | views::group_by([](auto c1, auto c2) {
            return isalpha(c1) == isalpha(c2);
         })  // [['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'], ...]
       | views::for_each(views::reverse_fn{}); // ['s', 't', 'r', 'i', 'n', 'g', ' ', ...]

That’s it! But…

The laziness is strong with this one

I felt happy after I compiled and ran the snippet above for the first time. However, just after pressing the launch button, I realized something more: the first string reversal could be done lazily too:

std::string input = "reverse words in this string";
auto output = views::reverse(input) 
       | views::group_by([](auto c1, auto c2) {
            return isalpha(c1) == isalpha(c2);
         })  // [['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'], ...]
       | views::for_each(views::reverse_fn{}); // ['s', 't', 'r', 'i', 'n', 'g', ' ', ...]

std::cout << output;

Let’s see what is going on here:

  • for_each applies reverse_fn on one element and “pulls” such an element from the block behind,
    • then group_by is asked for one element. One element here means one sub-range, made of consecutive elements taken until the predicate evaluates to false,
      • such elements are taken from the step behind, that is reverse, which takes elements from input starting from the back: g, n, i, r, t, s, ' ', s, i, ...
    • group_by then can make the first sub-range [g, n, i, r, t, s]. That sub-range is given back to for_each,
  • finally, for_each applies reverse_fn on what group_by returned, getting back the letters s, t, r, i, n, g
  • Bear in mind that for_each joins all the results together and thus the final range is flattened ([s, t, r, i, n, g, , t, h, i, s, , i, n, ...]). Try replacing for_each with transform and spot the differences; see the sketch right below!
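Here is that comparison sketched (my code, assuming range-v3): transform keeps one sub-range per group, while for_each reverses each group and joins everything into a flat range of characters.

auto grouped = views::reverse(input)
             | views::group_by([](auto c1, auto c2) {
                   return isalpha(c1) == isalpha(c2);
               });
auto nested = grouped | views::transform(views::reverse_fn{}); // [['s','t','r','i','n','g'], [' '], ['t','h','i','s'], ...]
auto flat   = grouped | views::for_each(views::reverse_fn{});  // ['s','t','r','i','n','g',' ','t','h','i','s', ...]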

This leads to the end of the story.

However, if you take a closer look, you will see that the range-based solution can’t really replace the in-place solution. Well, this educational post aimed just to reinterpret the idea with ranges, explaining a possible approach. This is not necessarily a replacement.

But I could push a bit more and show you that thinking range-fully could be useful for solving the problem in-place too. We are not so far from it, after all.

Hopefully, this could be educational too.

A down-to-earth in-place implementation

Let me start from here:

std::string input = "reverse words in this string";
reverse(input); // gnirts siht ni sdrow esrever

At this point, even though this affirmation does not make any sense at all, I would like to call actions::reverse (or ranges::reverse) on each of the sub-ranges produced by group_by.

Actually, my desire is more realistic if we see every sub-range as a slice of the original string. Each slice begins where the previous one ends. The very first slice begins at 0, obviously. You can imagine using views::slice to take such portions [lower bound, upper bound) of the input range:

gnirts siht ni sdrow esrever
0     6

views::slice(input, 0, 6) => gnirts
views::slice(input, 6, 7) => ' '
...

Reversing each slice is easy:

actions::reverse(views::slice(input, 0, 6));
actions::reverse(views::slice(input, 6, 7));
...

The problem could be turned into the problem of producing the range of each slice’s bounds.

Think about it for a while.

To do so, first of all, we transform each sub-range produced by group_by to its length:

auto groups = input | views::group_by([](auto c1, auto c2) {
        return isalpha(c1) == isalpha(c2);
});
auto bounds = groups | views::transform([](auto g) { return size(g); })
                     | ...

For example:

gnirts siht ni sdrow esrever
[ ['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'] ... ]
=> [6, 1, 4, 1, 2, 1, 5, 1, 7]

Now look: the first slice gnirts goes from 0 to 6 (exclusive), the second one (only containing the first whitespace) goes from 6 to 6 + 1 (exclusive), the third one (siht) goes from 6 + 1 to 6 + 1 + 4 (exclusive)…do you see the pattern?

That’s a partial_sum!

auto groups = input | views::group_by([](auto c1, auto c2) {
        return isalpha(c1) == isalpha(c2);
});
auto bounds = groups | views::transform([](auto g) { return size(g); })
                     | views::partial_sum;

This leads to:

gnirts siht ni sdrow esrever
[ ['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'] ... ]
=> [6, 1, 4,  1,  2,  1,  5,  1,  7]
=> [6, 7, 11, 12, 14, 15, 20, 21, 27]

The very first 0 is missing. Let’s push it to the front:

auto groups = input | views::group_by([](auto c1, auto c2) {
        return isalpha(c1) == isalpha(c2);
});
auto bounds = views::concat(
               views::single(0),
               groups | views::transform([](auto g) { return size(g); })
              ) | views::partial_sum;

Easy peasy:

gnirts siht ni sdrow esrever
[ ['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'] ... ]
=> [0, 6, 1, 4,  1,  2,  1,  5,  1,  7]
=> [0, 6, 7, 11, 12, 14, 15, 20, 21, 27]

We are almost there. Now we need to turn this range of numbers into [lower bound, upper bound) sub-ranges. Basically, we need to create the range of 2-element sliding windows on top of the current range. After making such windows, we could create the corresponding string slices and apply actions::reverse on each. Here is the expected range of windows:

gnirts siht ni sdrow esrever
[ ['g', 'n', 'i', 'r', 't', 's'], [' '], ['s', 'i', 'h', 't'] ... ]
=> [0, 6, 1, 4,  1,  2,  1,  5,  1,  7]
=> [0, 6, 7, 11, 12, 14, 15, 20, 21, 27]
=> [[0, 6], [6, 7], [7, 11], [11, 12], [12, 14], [14, 15], [15, 20], [20, 21], [21, 27]]

Indeed, views::slice(input, 0, 6) yields exactly ['g', 'n', 'i', 'r', 't', 's'], views::slice(input, 6, 7) yields [' '], and so on.

One possible way to get this range consists in using views::sliding(2):

auto bounds = views::concat(
                views::single(0),
                groups | views::transform([](auto g) { return size(g); })
              ) | views::partial_sum
                | views::sliding(2);

Then we can iterate this range and reverse each slice of the string:

for (auto window : bounds)
    actions::reverse(views::slice(input, *begin(window), *++begin(window)));

std::cout << input;

Here is the full code:

std::string input = "reverse words in this string";
reverse(input); // gnirts siht ni sdrow esrever

auto groups = input | views::group_by([](auto c1, auto c2) {
        return isalpha(c1) == isalpha(c2);
});

auto bounds = views::concat(
                views::single(0),
                groups | views::transform([](auto g) { return size(g); })
              ) | views::partial_sum
                | views::sliding(2);

for (auto window : bounds)
    actions::reverse(views::slice(input, *begin(window), *++begin(window)));

std::cout << input;

Another option to obtain the windows is using views::zip:

auto bounds = views::concat(
                views::single(0),
                groups | views::transform([](auto g) { return size(g); })
              ) | views::partial_sum;
    
for (auto [lb, ub] : views::zip(bounds, views::drop(bounds, 1)))
	actions::reverse(views::slice(input, lb, ub));

std::cout << input;

Note that the latter causes bounds to be “evaluated” twice (once because of views::drop).
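If that double evaluation bothers you, one workaround (my own sketch, not from the original solution; it assumes range-v3’s ranges::to for materializing a view) is to evaluate the bounds into a vector first:

auto boundsVec = bounds | ranges::to<std::vector>(); // evaluated once
for (auto [lb, ub] : views::zip(boundsVec, boundsVec | views::drop(1)))
    actions::reverse(views::slice(input, lb, ub));

std::cout << input;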

After reading this post, Ruzena Gürkaynak (the author of Fully Functional C++ with Range-v3) made a great suggestion: we might zip with views::exclusive_scan instead of inserting 0 at the front:

auto lengths = groups | views::transform([](auto g) { return size(g); });

auto intervals = views::zip( lengths | views::exclusive_scan(0),
                             lengths | views::partial_sum);
    
for (auto [lb, ub] : intervals)
	actions::reverse(views::slice(input, lb, ub));

std::cout << input;

That’s it, folks! I won’t discuss this further, but I would be very glad to hear your thoughts on this. How can we improve the snippets above? Which problems do you see here? Let’s start a conversation in the comments section below.

This is an educational post and it doesn’t claim to replace the original solution. It’s just a tale about having fun with ranges and patterns. I hope you enjoyed the journey and many thanks for getting here!

What about sharing your own #thatsarotate stories?

Some months ago, I faced this problem for the first time:

Given an array of integers A sorted in non-decreasing order, return an array of the squares of each number, also in sorted non-decreasing order.

For example:

-4 -2 0 1 5

The result array is:

0 1 4 16 25

The naive solution consists first in squaring each element and then sorting the whole array:

for (auto& e : v)
    e *= e;
sort(begin(v), end(v));

We can actually replace the loop with a call to std::transform:

transform(begin(v), end(v), begin(v), [](auto e) { return e*e; });
sort(begin(v), end(v));

This code was also shown by my friend Conor Hoekstra at Meeting C++ 2019. Side note: Conor is doing a great job with showing coding patterns applied to programming challenges. If you have appreciated my recent posts on patterns and algorithms, I heartily recommend you follow Conor’s channel.

Just for fun, the same result can be written by using a temporary ordered container like std::multiset:

multiset<int> s;
transform(begin(v), end(v), inserter(s, end(s)), [](int e){ return e*e; });
v.assign(s.begin(), s.end());

Although the solutions above are very elegant (and generic), we pay the cost of sorting.

Can we do better?

Often, to answer such questions we need to look at and exploit the constraints of the problem (other personal thoughts on this topic).

First of all, consider that we have to return a new vector. Thus, we can assume we can use an extra vector for the output.

Moreover, we know the array is sorted in non-decreasing order. We take advantage of this information.

In fact, the array contains first a negative part and then a non-negative part (either might be empty).

We can find the first non-negative value and then merge the two parts together by repeatedly appending to the output vector the square of the value with the smaller absolute value:

vector<int> sortedSquares(vector<int>& A)
{
    auto square = [](int x) { return x * x; }; // small helper, assumed by the original snippet
    vector<int> res(A.size());
    // locate the first non-negative value
    auto it = find_if(begin(A), end(A), [](int i) { return i >= 0; });
    auto pos = (int)distance(begin(A), it);
    auto neg = pos - 1;

    int len = (int)A.size();
    for (auto i = 0; i < len; ++i)
    {
        // negative values over
        if (neg < 0)
        {
            res[i] = square(A[pos++]);
        }
        // positive values over
        else if (pos >= len)
        {
            res[i] = square(A[neg--]);
        }
        // positive value is bigger
        else if (abs(A[pos]) > abs(A[neg]))
        {
            res[i] = square(A[neg--]);
        }
        // negative value is bigger
        else
        {
            res[i] = square(A[pos++]);
        }
    }
    return res;
}

The solution above is linear in both time and space – actually, we can assume that the extra space is required by the problem itself.

Another version of this approach does only one scan of the array. Basically, we can avoid calling find_if first. The key observation is that v[0] is the “biggest” (in absolute value) among the negative elements, whereas v[last] is the biggest among the positives. Thus, we can merge the two parts together going in the opposite direction:

vector<int> sortedSquares(vector<int>& A)
{
    vector<int> ret(A.size());
    int r = A.size() - 1, l = 0;
    for (int i = r; i >= 0; i--)
    {
        if (abs(A[r]) > abs(A[l]))
        {
            ret[i] = A[r] * A[r];
            r--; // decrementing in a separate statement avoids an unsequenced read/write of r
        }
        else
        {
            ret[i] = A[l] * A[l];
            l++;
        }
    }
    return ret;
}

Before I set this problem at Coding Gym for the first time, I couldn’t find any other solution or pattern. Moreover, I had not answered the question “is it possible to solve this problem efficiently without using extra space?” either.

Here is when the tale really begins.

I set the problem at Coding Gym and, during the retrospective, the attendees share some solutions very similar to those I have just presented here. Different languages but same approach.

At some point, Eduard “Eddie” Rubio Cholbi and Stefano Ligabue present another solution. They have solved the problem in many ways already, embracing one of the key principles of Coding Gym that states “every problem has value”. After some iterations, they end up with the following Python snippet:

from heapq import merge  # assuming heapq.merge was intended: a lazy merge of sorted inputs

def squaresOfSortedArray(values):
    positive = []
    negative = []

    for v in values:
        (negative, positive)[v >= 0].append(v**2)

    return merge(reversed(negative), positive)

Note: I know, the name “positive” is misleading (0 is included). I use it anyway for the rest of the article. Please be patient.

They explain the solution that I can recap as follows:

  • first of all, they split the vector into two lists (negative and positive) by using some Pythonian syntactic sugar
  • each value is squared just before being appended to the corresponding list
  • finally, the negative list is reversed and merged with the list of positives

The key here is realizing that merge does the main work that my long C++ snippet does. After all, the result is just the sorted merge of the two lists.

Here is an example:

-4 -2 0 1 5

negative: [-4*-4, -2*-2]
positive: [0, 1, 25]

merge(reverse([16, 4]), [0, 1, 25])
=> merge([4, 16], [0, 1, 25])
=> [0, 1, 4, 16, 25]

After they show the code, I am left speechless for some seconds.

Then some patterns start popping up in my brain:

merge and reverse are very easy to see. I can also see an application of map (transform) that squares each value.

The hardest pattern to see is hidden behind the for loop.

Suddenly I can see it very clearly: it’s a way to partition the vector into two parts.

It’s a partition_copy.

This is an insight: a quick and sudden discovery. Sudden means that you do not get any premonition about that. The solution – the aha! moment – just turns up from your unconscious mind to your conscious mind. Anything can literally trigger an insight.

In this case, the insight has been triggered by looking at that Python snippet Eddie and Stefano presented.

Getting contaminated by other people’s ideas is for me very important. That’s why I have developed Coding Gym, after all.

After looking at the Python snippet, I am champing at the bit to turn it into C++. It’s late, though, and I want to enjoy the moment.

I go to sleep and the day after I wake up a bit earlier than usual, go to the office and allocate some time before work to turn Eddie and Stefano’s code into C++.

First of all, I translate Python into C++ word by word:

vector<int> negatives, positives, res(v.size());
partition_copy(begin(v), end(v),
               back_inserter(negatives), back_inserter(positives),
               [](auto i) { return i < 0; });
reverse(begin(negatives), end(negatives));
transform(begin(negatives), end(negatives), begin(negatives), [](auto i){ return i*i; });
transform(begin(positives), end(positives), begin(positives), [](auto i){ return i*i; });
merge(begin(negatives), end(negatives), begin(positives), end(positives), begin(res));
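For the record, here is roughly what the fragment above assumes in order to compile (a sketch of mine):

#include <algorithm> // partition_copy, reverse, transform, merge
#include <iterator>  // back_inserter
#include <vector>
using namespace std;

vector<int> v = {-4, -2, 0, 1, 5}; // the example input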

Got it! I cheerfully think. It was not so hard. Indeed, solutions via insight have been proven to be more accurate than non-insight solutions.

The best is yet to come, actually.

I have another insight. I can see the solution that does not make any use of the support vectors (negatives and positives). Going back to the basics of the standard library, I think of using iterators instead. The two parts should be just two ranges.

I think again about my solution with find_if, since the partition point is found right there:

auto pos = find_if(begin(v), end(v), [](auto e) { return e>=0; });

pos is the beginning of the positive range. negatives is obtained by going from pos to the beginning in the opposite direction.

Can you see it?

Here is an example:

-4 -2 0 1 5
      ^ pos

Here are the ranges:

-4 -2 0 1 5
^___^        NEGATIVES
      ^___^  POSITIVES

After finding the partition point, squaring all the elements is still needed:

transform(begin(v), end(v), begin(v), [](auto i){return i*i;});

Finally, the magic, the tribute to the flexibility of the standard library, my insight turned into C++ bits:

merge(make_reverse_iterator(pos), rend(v), pos, end(v), begin(res));

There is no other standard library that is so flexible. We should all be so grateful to have it.

Here is what the final code looks like:

vector<int> res(v.size());
auto pos = find_if(begin(v), end(v), [](auto i){ return i>=0; }); // find first non-negative
transform(begin(v), end(v), begin(v), [](auto i){return i*i;}); // square them all
merge(make_reverse_iterator(pos), rend(v), pos, end(v), begin(res)); // the magic is here

After I wrote this code, I was just both thrilled and calm at the same time. I think I was in the zone.

There is something more.

I have some time to tackle two open points:

  • how to write the in-place solution?
  • how C++20 ranges step into the game?

The former is quite easy now, thanks to our standard library.

We have inplace_merge. It’s just a matter of arranging the input properly. You should know that inplace_merge expects two consecutive sorted ranges, so I just have to reverse the negative part:

auto pos = find_if(begin(v), end(v), [](auto i){ return i>=0; });
transform(begin(v), end(v), begin(v), [](auto i){return i*i;});
reverse(begin(v), pos); // needed for inplace_merge
inplace_merge(begin(v), pos, end(v));

The solution above was not found by insight; instead, it was a mere application of the library. If I had not known inplace_merge, I would have searched the reference for it.

Here is an example:

-4 -2 0 1 5
      ^ pos
16 4 0 1 25 // transform
4 16 0 1 25 // reverse
0 1 4 16 25 // inplace_merge

Again, I have just applied some patterns. Patterns have both pre-conditions and post-conditions. Knowing them is important.

Now, let me show you a couple of snippets using ranges.

Here is the first one:

std::vector<int> res(v.size());
auto firstPos = ranges::find_if(v, [](auto i){ return i>=0; });
auto positives = ranges::subrange(firstPos, std::end(v));
auto negatives = ranges::subrange(std::make_reverse_iterator(firstPos), std::rend(v));

const auto square = [](auto i) { return i*i; };
ranges::merge(ranges::views::transform(positives, square),
              ranges::views::transform(negatives, square),
              std::begin(res));

The second one:

std::vector<int> res(v.size());
auto positives = views::drop_while(v, [](auto i){ return i<0; });
auto negatives = views::drop_while(views::reverse(v), [](auto i) { return i>=0; });

const auto square = [](auto i) { return i*i; };
ranges::merge(views::transform(positives, square),
              views::transform(negatives, square),
              std::begin(res));

I think this one does a bit more work because the first drop_while skips all the negatives to find the first non-negative, and the second one skips all the non-negatives from the back to find the last negative. The other solution, instead, visits the positive values only once.

The tale ends here. I think the point is: learning never stops when we are ready to get infected by other people’s ideas.

No matter if it’s another language, another style, another topic. We can always learn something from others.

In this specific example, I spent some time on a problem and I got stuck with my solution. The code I wrote was just fine, it worked, and I could have said to myself “my solution works, I do not care about knowing others’”. But I would have missed a big opportunity. I wouldn’t have found new patterns, the in-place solution, and the application of ranges.

And, hopefully, I have shared some ideas useful for you.

Sometimes we just need to let other people into our process of learning.

In the same way, we should share our ideas and results because those can possibly inspire others.

My previous post has been well received by the ecosystem so I have decided to write a short follow-up article on another classical problem that can be solved with similar patterns.

In finance, the drawdown is the measure of the decline from a historical peak in some series of data (e.g. price of a stock on a certain period of time).

For example, here is the hypothetical price series of the fake “CPlusPlus” stock:

You know, the 2008 crisis affected C++ too, renaissance in 2011/2012, some disappointments in 2014/2015 because C++14 was a minor release and “Concepts” didn’t make it, and nowadays the stock is increasing since programmers feel hopeful about C++20.

Drawdowns are the differences between the value at one year and the value at the previous maximum peak. For instance, in 2008 we have a drawdown of 28-12 (16) and in 2015 the “dd” is 35-21 (14).

The maximum drawdown (MDD) is just the highest value among all the drawdowns. In the series above, the maximum drawdown is 16.

In economics, MDD is an indicator of risk and so an important problem to solve.

Let’s see how to solve this problem and how to bring more value out of it.

The maximum difference

The MDD problem can be formulated as follows: given an array A, find the maximum difference A[j] - A[i] with j < i. The constraint on i and j makes this problem challenging. Without it we can just find the maximum and minimum elements of the array.

The brute force approach is quadratic in time: for each pair of indexes i, j we keep track of the maximum difference. The code follows:

int MaxDrawdownQuadratic(const vector<int>& stock)
{
    auto mdd = 0;
    for (auto j=0u; j<stock.size(); ++j)
    {
        for (auto i=j; i<stock.size(); ++i)
        {
            mdd = std::max(mdd, stock[j] - stock[i]);
        }
    }        
    return mdd;
}

When I facilitate Coding Gym, I suggest that attendees who are stuck start from the naive solution, if any. The naive solution might be a red herring but it can lead to the optimal solution when we ask and answer key questions.

In this case the key question is “how can we remove/optimize the inner loop?”.

The current solution starts from stock[j] and it goes forward to calculate the differences between that value and all the following ones. This approach considers stock[j] like a peak and goes forward.

The key question becomes: “how can we avoid going through all the following elements?”

Think about the difference stock[j] - stock[i]. For every i, such a difference is maximized when stock[j] is the maximum among the preceding values (j from 0 to i - 1). Thus, we can ignore all the other values since stock[j] - stock[i] would be lower.

The insight comes when we “reverse” our way of thinking about the pairs: we shouldn’t start from stock[j] and then go forward to find the lowest value. Instead, we should start from stock[i] having the previous biggest value cached!

So, instead of looking at each pair, we can just keep track of the maximum value preceding any other index. Here is the idea:

int MaxDrawdown(const vector<int>& stock)
{
    auto mdd = 0;
    auto maxSoFar = stock.front();
    for (auto i=1u; i<stock.size(); ++i)
    {
        mdd = std::max(mdd, maxSoFar - stock[i]);
        maxSoFar = std::max(maxSoFar, stock[i]);
    }
    return mdd;
}

The solution is linear in time.
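Here is a quick usage sketch on a made-up series shaped like the story above (my numbers: the peak is 28 and the following trough is 12):

std::vector<int> stock = {28, 12, 18, 27, 35, 21, 24, 33};
std::cout << MaxDrawdown(stock); // 16, the drop from 28 down to 12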

Now it’s time for me to show you how to get more out of this problem. It’s time for me to inspire you to achieve more every time you face a puzzle. There is no limitation.

In the rest of the post, I’m just freely playing with the problem to express myself.

Emerging patterns

As I elaborated in the previous post, sometimes we can combine known patterns to solve problems.

I don’t have a special recipe that brings out patterns from code. Sometimes I just “see” them between the lines (as for Kadane). Other times I try some tricks to discover them. Or I don’t find any patterns at all.

I recently met the MDD problem again (after some years) and it looked to me similar to Kadane. I was biased by Kadane and my mind was unconsciously suggesting I apply similar patterns to the MDD problem because the code looked similar. That might be dangerous! It doesn’t work every time. Ever heard about “the hammer syndrome”? This is the caveat of “thinking inside the box”. Anyway, in this case my sixth sense was right.

First of all I had an intuition: I realized that all the evolving values of maxSoFar are totally independent of the other decision points of the algorithm. I could enumerate them separately. One trick to use when searching for patterns is asking the question “which computations can be isolated or extracted?”.

maxSoFar is just a cumulative maximum. For instance:

4 3 2 6 8 5 7 20

The cumulative maximum is:

4 4 4 6 8 8 8 20

The pattern that can generate such a series is prefix sum (when “sum” is not addition but maximum).

So I refactored the original code by isolating the cumulative maximum calculation into a separate vector:

int MaxDrawdown(const vector<int>& stock)
{
    std::vector<int> maxs(stock.size());
    std::partial_sum(std::begin(stock), std::end(stock), std::begin(maxs), [](auto l, auto r) { return std::max(l, r); });
    auto mdd = 0;
    for (auto i=1u; i<stock.size(); ++i)
    {
        mdd = std::max(mdd, maxs[i] - stock[i]);       
    }
    return mdd;
}

The next trick is to figure out if the loop hides another pattern. The question is: what kind of operation is underneath the calculation of mdd?

We have some hints:

  • at every step we calculate maxs[i] - stock[i], so we read the i-th value from two sequences,
  • every result of such a calculation is then reduced by applying std::max

Do you know this pattern?

Sure! It’s zip | map | reduce!

See this post for more details.

In other words:

  • zip maxs with stock (we pair them off)
  • map every pair with subtraction
  • reduce the intermediate results of map with std::max

In C++ we can express such a pattern with std::inner_product (I’m not saying “use this in production”, I’m just letting my brain work):

int MaxDrawdown(const vector<int>& stock)
{
    std::vector<int> maxs(stock.size());
    std::partial_sum(std::begin(stock), std::end(stock), std::begin(maxs), [](auto l, auto r) { return std::max(l, r); });
    return std::inner_product(std::begin(maxs), std::end(maxs), 
                              std::begin(stock), 
                              0, 
                              [](auto l, auto r) { return std::max(l, r); }, 
                              std::minus<>{});
}

Now we have a solution that is harder to follow for people not familiar with STL algorithms, and it pays an additional scan as well as more memory usage…

Beauty is free

First of all, although the code is not intended for production use, I am already satisfied because my brain has worked out. As you see, the line between production code and “training code” might be more or less marked. In my opinion, our brain can benefit from both training and production “styles” (when they differ).

Now, I would like to push myself even more by giving my own answer to the following question:

What might this code look like in next-generation C++?

What about using ranges? Might that help solve the issues introduced before?

Here is my answer:

int MaxDrawdown(const vector<int>& stock)
{
    auto maxs = view::partial_sum(stock, [](auto l, auto r){ return std::max(l, r); });
    auto dds = view::zip_with(std::minus<>{}, maxs, stock);
    return ranges::max(dds);
}

The combination of view::zip_with and ranges::max has displaced std::inner_product. In my opinion, it’s much more expressive.

I hope someone will propose and defend some function objects for min and max so we can avoid the lambda – after all, we have std::minus and std::plus, why not have std::maximum and std::minimum (or such)?
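In the meantime, such a function object is easy to sketch on our own (hypothetical code of mine, not part of any standard):

// a hypothetical 'maximum' function object, in the spirit of std::plus
struct maximum
{
    template <typename T>
    constexpr T operator()(const T& l, const T& r) const
    {
        return std::max(l, r);
    }
};

// usage: view::partial_sum(stock, maximum{});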

If you are wondering if this code does only one scan, the answer is yes. Every view here is lazy and does not use extra memory.

We can happily argue again that “beauty is free”.

Note:

usually the MDD is calculated as a ratio because it makes more sense to display it as a percentage. For example:

float MaxDrawdown(const vector<int>& stock)
{
    auto maxs = view::partial_sum(stock, [](auto l, auto r){ return std::max(l, r); });
    auto dds = view::zip_with([](auto peak, auto ith) { return (float)(peak-ith)/peak; }, maxs, stock);
    return 100.0f * ranges::max(dds);
}

 

Playing with patterns

Consider again the brute force solution:

int MaxDrawdownQuadratic(const vector<int>& stock)
{
    auto mdd = 0;
    for (auto j=0u; j<stock.size(); ++j)
    {
        for (auto i=j; i<stock.size(); ++i)
        {
            mdd = std::max(mdd, stock[j] - stock[i]);
        }
    }        
    return mdd;
}

We have seen that the optimal solution consists in just scanning the array forward and “caching” the biggest stock[j] so far.

A similar scheme applies if we think the solution backwards: we scan the array backwards and cache the lowest price so far:

int MaxDrawdownBackwards(const vector<int>& stock)
{
    auto mdd = 0;
    auto minSoFar = stock.back();
    for (auto i=stock.size()-1; i>0; --i)
    {
        mdd = std::max(mdd, stock[i-1] - minSoFar);
        minSoFar = std::min(minSoFar, stock[i-1]);
    }
    return mdd;
}

Getting to the ranges-based solution is not so hard since we know how the problem is broken down into patterns: the forward cumulative maximum is replaced with the backward cumulative minimum. Still the prefix sum pattern. We just change the proper bits:

  • stock is iterated backwards
  • std::min displaces std::max

zip | map | reduce stays the same except for the order of the inputs (we subtract the i-th running minimum from stock[i]) and the direction of stock (backwards).

Thus, here is the code:

int MaxDrawdownBackwards(const vector<int>& stock)
{
   auto mins = view::partial_sum(view::reverse(stock), [](auto l, auto r){ return std::min(l, r); });
   return ranges::max(view::zip_with(std::minus<>{}, view::reverse(stock), mins));
}

If you have some difficulties at this point, write down the “intermediate” STL code without ranges.

The same challenge gave us the opportunity to find another solution with the same patterns.

Playing with patterns is to programmers’ creativity as playing with colors is to painters’ creativity.

Playing with patterns is a productive training for our brain.

 

Problem variations

Playing with patterns is also useful for tackling problem variations fluently. For instance, if the problem changes to “calculate the minimum drawdown”, we just have to replace ranges::max with ranges::min. That’s possible because we know how the problem has been broken down into patterns.
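For instance, here is that variation sketched (my code, the same shape as MaxDrawdown with ranges::min swapped in):

int MinDrawdown(const vector<int>& stock)
{
    auto maxs = view::partial_sum(stock, [](auto l, auto r){ return std::max(l, r); });
    return ranges::min(view::zip_with(std::minus<>{}, maxs, stock));
}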

The MDD problem has interesting variations that can be solved with the same patterns (customizing the proper bits). A couple of challenges are left to the willing reader:

  • Given an array A, find the maximum difference A[j] - A[i] with i < j (MDD is j < i).
    Rephrasing: given a series of stock prices, find the maximum profit you can obtain by buying the stock on a certain day and selling it on a future day. Try your solutions here (alas, up to C++14).
  • Given an array of stock prices, each day, you can either buy one share of that stock, sell any number of shares of stock that you own, or not make any transaction at all. What is the maximum profit you can obtain with an optimum trading strategy? See the full problem and try your solutions here (alas, up to C++14).

 

Have fun practicing with STL algorithms and ranges!

“How many moves ahead could you calculate?”

“Just one, the best one”.

The legendary answer by José Capablanca, a world chess champion of the last century, indicates a commonly known fact: chess champions win by being better at recognizing patterns that emerge during the game. They remember meaningful chess positions better than beginners. However, experts do not remember random positions effectively better than non-experts.

Patterns serve as a kind of shorthand that’s easier to remember than a meaningless configuration of pieces that could not occur in a real game.

This fact does not apply to chess only. Our brain works by constantly recognizing, learning and refining patterns on the world.

The reason is efficiency: the brain applies such an optimization to ignore some of the possible choices we have in every situation. Thus, experts get better results while thinking less, not more.

You should know another category of experts who are usually good at recognizing patterns: programmers.

We, programmers, get better results while thinking in patterns. We decompose complex problems in combinations of simpler patterns. However, many problems cannot be solved with known patterns. Here is where creativity kicks in.

However, creativity not only enables human beings to solve unknown problems with new ideas; it’s also the capacity to reinterpret known problems in new and inspiring ways. It’s the art of creating new associations. As Jules Henri Poincaré once said, “creativity is the ability to unite pre-existing elements in new combinations that are useful”. In programming, “pre-existing elements” are commonly called patterns.

That’s the spirit of this article. I will revisit a classical problem from another perspective.

The article is a bit verbose because the style is didactic: I spend some time explaining the example problem and the solution. If you already know the Maximum Subarray Problem, you can skip the following section. Even though you know Kadane’s algorithm, it’s worth reading the dedicated section anyway because I get to the solution from a slightly different point of view than the canonical one.

The Maximum Subarray Problem

Let me introduce one protagonist of the story, the famous “Maximum Subarray” problem whose linear solution has been designed by Joseph “Jay” Kadane in the last century.

Here is a simple formulation of the problem:

Given an array of numbers, find a contiguous subarray with the largest sum.

We are just interested in the value of the largest sum (not the boundaries of the subarray).

Example:

[-3,1,-3,4,-1,2,1,-5,4]

Max subarray: [4,-1,2,1]. Sum: 6.

Clearly, the problem is interesting when the array contains negative numbers (otherwise the maximum subarray is the whole array itself).

The literature around this problem is abundant and there exist several discussions about it. We’ll focus only on Kadane’s algorithm, the most famous (and efficient) solution.

The theory behind Kadane’s algorithm is not straightforward and it’s beyond the scope of this didactic post. The algorithm lies in an area of programming called Dynamic Programming, one of the hardest techniques to solve problems in computer science – and one of the most efficient as well. The basic principle of Dynamic Programming consists in breaking a complex problem into simpler sub-problems and caching their results.

For example, the task of calculating “1+1+1+1” can be broken down this way: first we calculate “1+1=2”, then we add “1” to get “3” and finally we add “1” to get “4”. Implicitly, we have “cached” the intermediate results since we did not start from scratch every time – e.g. to calculate (1+1)+1 we started from “1+1=2”. A more complex example is Fibonacci: each number is calculated from the famous formula fibo(n) = fibo(n-1) + fibo(n-2).

For example:

fibo(4) = fibo(3) + fibo(2)

= fibo(2) + fibo(1) + fibo(2)

= fibo(1) + fibo(0) + fibo(1) + fibo(2)

= fibo(1) + fibo(0) + fibo(1) + fibo(1) + fibo(0)

The sub-problems are called “overlapping” since we solve the same sub-problem multiple times (fibo(2) called twice and fibo(1) and fibo(0) three times). However, the main characteristic of Dynamic Programming is that we do not recalculate the sub-problems that we have already calculated. Instead, we “cache” them. Without stepping into further details, there exist two opposite approaches which come with a corresponding caching strategy: Top-Down and Bottom-Up. Roughly speaking, the former is recursive, the latter is iterative. In both we maintain a map of already solved sub-problems. More formally, in the Top-Down approach the storing strategy is called memoization, whereas in the Bottom-Up one it is called tabulation.

In Bottom-Up we go incrementally through all the sub-problems and reuse the previous results. For instance:

std::vector<int> table(n + 1); // assuming n >= 1; the table caches every sub-result
table[0] = 0;
table[1] = 1;

for(auto i=2; i<=n; ++i)
   table[i] = table[i-1] + table[i-2];
return table[n];

On the other hand, Top-Down involves recursion:

// suppose memo has size n+1
int fibo(std::vector<int>& memo, int n) 
{
   if(n < 2)
      return n;

   if(memo[n] != 0)
      return memo[n];

   memo[n] = fibo(memo, n-1) + fibo(memo, n-2);
   return memo[n];
}

Now that we have a bit of background, let’s quickly meet Kadane’s algorithm.

Kadane’s algorithm

Kadane’s algorithm lies in the Bottom-Up category of Dynamic Programming, so it works by first calculating a solution for every sub-problem and then by using the final “table” of results in some way. In Fibonacci, we use the table by just popping out its last element. In Kadane we do something else. Let’s see.

My explanation of the algorithm is a bit different from the popular ones.

First of all, we should understand how the table is filled. Differently than Fibonacci, Kadane’s table[i] does not contain the solution of the problem at index i. It’s a “partial” result instead. Thus, we call such a table “partial_maxsubarray”.

partial_maxsubarray[i] represents a partial solution on the subarray ending at the ith index and including the ith element. The last condition is the reason why the result is partial. Indeed, the final solution might not include the ith element.

Let’s see what this means in practice:

[-3,1,-3,4,-1,2,1,-5,4]

partial_maxsubarray[0] means solving the problem only on [-3], including -3.
partial_maxsubarray[1] is only on [-3, 1], including 1.
partial_maxsubarray[2] is only on [-3, 1, -3], including -3.
partial_maxsubarray[3] is only on [-3, 1, -3, 4], including 4.
partial_maxsubarray[4] is only on [-3, 1, -3, 4, -1], including -1.
partial_maxsubarray[5] is only on [-3, 1, -3, 4, -1, 2], including 2.
partial_maxsubarray[6] is only on [-3, 1, -3, 4, -1, 2, 1], including 1.
partial_maxsubarray[7] is only on [-3, 1, -3, 4, -1, 2, 1, -5], including -5.
partial_maxsubarray[8] is only on [-3, 1, -3, 4, -1, 2, 1, -5, 4], including 4.

For each index i, the ith element will be included in the partial_maxsubarray calculation. We have only one degree of freedom: we can change where to start.

Consider for example partial_maxsubarray[2]. If the main problem was on [-3, 1, -3], the solution would have been 1 (and the subarray would have been [1]). However, partial_maxsubarray[2] is -2 (and the subarray is [1, -3]), because of the invariant.

Another example is partial_maxsubarray[4], which is not 4 as you might expect, but 3 (the subarray is [4, -1]).

How to calculate partial_maxsubarray[i]?

Let’s do it by induction. First of all, partial_maxsubarray[0] is clearly equal to the first element:

partial_maxsubarray[0] = -3

Then, to calculate the next index (1) we note that we have only one “degree of freedom”: since we must include 1 anyway, we can either extend the current subarray by one (getting [-3, 1]) or we can start a new subarray from position 1. Let me list the two options:

  1. extend the current subarray, getting [-3, 1], or
  2. start a new subarray from the current index, getting [1].

The choice is really straightforward: we choose the subarray with the largest sum! Thus, we choose the second option (partial_maxsubarray[1] = 1).

To calculate partial_maxsubarray[2]:

  1. keep 1, [1, -3], or
  2. start a new subarray [-3]

Clearly, the former is better (partial_maxsubarray[2] = -2).

Again, partial_maxsubarray[3]:

  1. keep 4, [1, -3, 4], or
  2. start a new subarray [4]

The latter is larger (partial_maxsubarray[3] = 4).

Do you see the calculation underneath?

For each index, we calculate partial_maxsubarray[i] this way:

partial_maxsubarray[i] = max(partial_maxsubarray[i-1] + v[i], v[i])

At each step i, we decide whether to start a new subarray from i or to extend the current subarray by one on the right.

Once we have filled partial_maxsubarray, do you see how to use it to calculate the solution to the main problem?

Let’s recall how we calculated partial_maxsubarray[2]:

partial_maxsubarray[0] = -3
partial_maxsubarray[1] = 1

partial_maxsubarray[2] = max(partial_maxsubarray[1] + v[2], v[2])

Since v[2] is -3, we ended up with -2. Thus, partial_maxsubarray[1] is larger than partial_maxsubarray[2].

Running the algorithm on the remaining indexes we get:

partial_maxsubarray[3] = 4
partial_maxsubarray[4] = 3
partial_maxsubarray[5] = 5
partial_maxsubarray[6] = 6
partial_maxsubarray[7] = 1
partial_maxsubarray[8] = 5

It turns out that partial_maxsubarray[6] has the largest value. This means there is a subarray ending at index 6 having the largest sum.

Thus, the solution to the main problem is simply calculating the maximum of partial_maxsubarray.

Let’s write down the algorithm:

int kadane(const vector<int>& v)
{
    vector<int> partial_maxsubarray(v.size());
    partial_maxsubarray[0] = v[0];
    for (auto i = 1u; i<v.size(); ++i) 
    {
      partial_maxsubarray[i] = std::max(partial_maxsubarray[i-1] + v[i], v[i]);
    }

    return *max_element(begin(partial_maxsubarray), end(partial_maxsubarray));
}

If you knew this problem already, you have probably noticed this is not the canonical way to write Kadane’s algorithm. First of all, this version uses an extra array (partial_maxsubarray) that is not used at all in the classical version. Moreover, this version does two iterations instead of just one (the first for loop and then max_element).

“Marco, are you kidding me?” – Your subconscious speaks loudly.

Stay with me, you won’t regret it.

Let me solve the two issues and guide you to the canonical form.

To remove the support array, we need to merge the two iterations into one. We would kill two birds with one stone.

We can easily remove the second iteration (max_element) by calculating the maximum along the way:

int kadane(const vector<int>& v)
{
    vector<int> partial_maxsubarray(v.size());
    partial_maxsubarray[0] = v[0];

    auto maxSum = partial_maxsubarray[0];
    for (auto i = 1u; i<v.size(); ++i) 
    {       
        partial_maxsubarray[i] = std::max(partial_maxsubarray[i-1] + v[i], v[i]);
        maxSum = max(maxSum, partial_maxsubarray[i]);
    }   
    return maxSum;
}

After all, a maximum is just a forward accumulation – it never goes back.

Removing the extra array can be done by observing that we do not really use it entirely: we only need the previous element. Even in Fibonacci, after all we only need the last two elements to calculate the current one (indeed, removing the table in Fibonacci is even easier). Thus, we can replace the support array with a simple variable:

int kadane(const vector<int>& v)
{
    int partialSubarraySum, maxSum;
    partialSubarraySum = maxSum = v[0];
    for (auto i = 1u; i<v.size(); ++i) 
    {       
        partialSubarraySum = max(partialSubarraySum + v[i], v[i]);
        maxSum = max(maxSum, partialSubarraySum);
    }   
    return maxSum;
}

The code above is likely more familiar to readers who already knew Kadane’s algorithm, isn’t it?
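Here is a quick sanity check on the example from the beginning (my usage sketch):

std::vector<int> v = {-3, 1, -3, 4, -1, 2, 1, -5, 4};
std::cout << kadane(v); // 6, the sum of [4, -1, 2, 1]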

Now, let’s have some fun.

Emerging patterns

Like most people, the first time I saw Kadane’s algorithm, it was in the canonical form. At the time, I didn’t notice anything particular. It was 2008 and I was at the university.

Many years passed and I met the problem again in 2016. In the last years, I have been regularly practicing with coding challenges to develop my ability to “think in patterns”. With “pattern” I mean simply a “standard solution to a standard problem, with some degree of customization”. For example, “sorting an array of data” or “filtering out a list” are patterns. Many implementations of patterns are usually provided in programming languages standard libraries.

I am used to considering every C++ standard algorithm as a pattern. For example, std::copy_if and std::accumulate are, for me, two patterns. Some algorithms are actually much more general in programming. For example, std::accumulate is usually known in programming as fold or reduce. I have talked about that in a previous post. On the other hand, something like std::move_backward is really C++ idiomatic.

Thinking in patterns can do some good for many reasons.

First of all, as I mentioned at the beginning of this article, our brain is designed to work this way. Cognitive scientists call “the box” our own state of the art, our own model of the world that enables us to ignore alternatives. Clearly, the box has pros and cons. Constantly thinking inside the box works as long as we deal with known problems. Thinking outside the box is required to solve new challenges. This is creativity.

When I think of creativity, I think of cats: they can be coaxed but they don’t usually come when called. We should create conditions which foster creativity. However, something we can intentionally influence is training our own brain with pattern recognition and application. To some extent, this is like “extending our own box”. This is what I have been doing in the last years.

Another benefit of thinking in patterns is expressivity. Most of the time, patterns enable people to express algorithms fluently and regardless of the specific programming language they use. Patterns are more declarative than imperative. Some patterns are even understandable to non-programmers. For example, if I ask my father to “sort the yogurt jars by expiration date and take the first one”, that’s an easy task for him.

So, in 2016 something incredible happened. When I met Kadane’s algorithm again, my brain automatically recognized two patterns from the canonical form. After visualizing the patterns in my mind, I broke down the canonical form into two main parts. This is why I first showed you this version of the algorithm:

int kadane(const vector<int>& v)
{
    vector<int> partial_maxsubarray(v.size());

    partial_maxsubarray[0] = v[0];
    for (auto i = 1u; i<v.size(); ++i) 
    {
      partial_maxsubarray[i] = std::max(partial_maxsubarray[i-1] + v[i], v[i]);
    }

    return *max_element(begin(partial_maxsubarray), end(partial_maxsubarray));
}

The second pattern is clearly maximum (which is a special kind of reduce, after all).

What is the first one?

Someone might think of reduce, but it is not: reduce collapses the whole sequence into a single value and does not keep the intermediate result of each step.

The pattern is prefix sum. Prefix sum is a programming pattern calculating the running sum of a sequence:

array = [1, 2, 3, 4]
psum  = [1, 3, 6, 10]
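As a plain loop, it is just (a minimal sketch):

std::vector<int> v = {1, 2, 3, 4};
std::vector<int> psum(v.size());
psum[0] = v[0];
for (auto i = 1u; i < v.size(); ++i)
    psum[i] = psum[i-1] + v[i]; // each step reuses the previous partial result
// psum is [1, 3, 6, 10]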

How does that pattern emerge from Kadane’s algorithm?

Well, “sum” is not really an addition but it’s something different. The update function emerges from the loop:

thisSum = std::max(previousSum + vi, vi);

Imagine calling this line of code for every element vi of v.

In C++, prefix sum is implemented by partial_sum. The first element of partial_sum is just v[0].

Here is what the code looks like with partial_sum:

int kadane(const vector<int>& v)
{
    vector<int> partial_maxsubarray(v.size());

    partial_sum(begin(v), end(v), begin(partial_maxsubarray), [](auto psumUpHere, auto vi){
        return max(psumUpHere + vi, vi);
    });
    
    return *max_element(begin(partial_maxsubarray), end(partial_maxsubarray));
}

When I ran this code and got a green bar, I felt very proud of myself. I didn’t spend any effort. First of all, my brain recognized the pattern from the hardest version of the code (the canonical form). My brain popped this insight from my unconscious layer up to my conscious reasoning. Then I did an intermediate step by arranging the code in two main parts (the cumulative iteration and then the maximum calculation). Finally, I applied partial_sum confidently.

You might think this is useless. I think this is a great exercise for the brain.

There is something more.

Since C++17, the code is easy to run in parallel:

int kadane(const vector<int>& v)
{
    vector<int> partial_maxsubarray(v.size());

    inclusive_scan(execution::par, begin(v), end(v), begin(partial_maxsubarray), [](auto psumUpHere, auto vi){
        return max(psumUpHere + vi, vi);
    });

    return *max_element(execution::par, begin(partial_maxsubarray), end(partial_maxsubarray));
}

inclusive_scan is like partial_sum but it supports parallel execution.

Beauty is free

Some time ago I read a short post titled “Beauty is free” that I cannot find anymore. The author showed that the execution time of an algorithm coded with raw loops gave the same performance as the same one written with STL algorithms.

Compared to the canonical form, our “pattern-ized” alternative does two scans and uses an extra array. It’s clear that beauty is not free at all!

The reason why I am writing this article now and not in 2016 is that I have finally found some time to try my solution with range v3. The result – in my opinion – is simply beautiful. Check it out:

int kadane(const vector<int>& v)
{
   return ranges::max(view::partial_sum(v, [](auto s, auto vi) { return std::max(s+vi, vi); }));
}

view::partial_sum is a lazy view, meaning that it applies the function to the ith and (i-1)th elements only when invoked. Thus, the code does only one scan. Moreover, the support array has vanished.

Running a few performance tests with clang -O3, it seems that the optimizer does a better job on this code than on the canonical form. On the other hand, the code does not outperform the canonical one on GCC. As I expected, running the range-based code in debug is about 10 times slower. I have not tried Visual Studio. My tests were not accurate, so please take these claims with a grain of salt.

I would like to inspire you to take action. Practice is fundamental.

A common question people ask me is “how can I practice?”. This deserves a dedicated post. Let me just tell you that competitive programming websites are a great source of self-contained and verifiable challenges, but they are not enough. You should creatively use real-world problems to practice.

Ranges are the next generation of the STL. Ranges are the next generation of C++.

However, if you want to learn how to use ranges, you have to know and apply STL patterns first.

Ranges are beyond the scope of this article. The documentation is here. A few good posts here and here. Help is needed as it was in 2011 to popularize C++11.

I hope to blog again on some extraordinary patterns and how to use them. For now, I hope you have enjoyed the journey through a new interpretation of Kadane’s algorithm.


Some scientific notions of this article come from The Eureka Factor.

C++17 added support for non-member std::size, std::empty and std::data. They are little gems for generic programming. Such functions have the same purpose as std::begin and the rest of the family: not only can’t you call member functions on C-arrays (e.g. arr.begin() or arr.size()), but free functions also allow for more generic programming because they can be added afterwards to classes you cannot modify.

This post is just a note about using std::size and std::empty on static C-strings (statically sized). Maybe it’s a stupid thing, but I found more than one person other than me falling into this “trap”. I think it’s worth sharing.

To make it short, some time ago I was working on a generic function to compare strings under a certain logic that is not important to know. In an ideal world I would have used std::string_view, but I couldn’t mainly for backwards-compatibility. I could, instead, put a couple of template parameters. Imagine this simplified signature:


template<typename T1, typename T2>
bool compare(const T1& str1, const T2& str2);

Internally, I was using std::size, std::empty and std::data to implement my logic. To be fair, such functions were just custom implementations of the standard ones (exhibiting exactly the same behavior) – because at that time C++17 was not available yet on my compiler and we have had such functions for a long time in our company’s C++ library.

compare could work on std::string, std::string_view (if available) and static C-strings (e.g. “hello”). While setting up some unit tests, I found something I was not expecting. Suppose that compare on two equal strings returns true, as a normal string comparison:


EXPECT_TRUE(compare(string("hello"), "hello"));

This was not passing at runtime.

Internally, at some point, compare was using std::size. The following is true:


std::size(string("hello")) != std::size("hello");

The reason is trivial: “hello” is just a statically-sized array of 6 characters. 5 + the null terminator. When called in this case, std::size just gives back the real size of such array, which clearly includes the null terminator.

As expected, std::empty follows std::size:


EXPECT_TRUE(std::empty("")); // ko

EXPECT_TRUE(std::empty(string(""))); // ok

EXPECT_TRUE(std::empty(string_view(""))); // ok

Don’t get me wrong, I’m not fueling an argument: the standard is correct. I’m just saying we have to be pragmatic and handle this subtlety. I just care about traps me and my colleagues can fall into. All the people I showed the failing expectations above just got confused. They worried about consistency.

If std::size is the “vocabulary function” to get the length of anything, I think it should be easy and special-case-free. We use std::size because we want to be generic, and handling special cases is the first enemy of genericity. I think we all agree that std::size on null-terminated strings (any kind) should behave as strlen.

Anyway, it’s even possible that we don’t want to get back the length of the null-terminated string (e.g. suppose we have an empty string buffer and we want to know how many chars are available), so the most correct and generic implementation of std::size is the standard one.

Back to compare function I had two options:

  1. Work around this special case locally (or just don’t care),
  2. Use something else (possibly on top of std::size and std::empty).

Option 1 is “local”: we only handle that subtlety for this particular case (e.g. the compare function). Alas, the next usage of std::size/empty possibly comes with the same trap.

Option 2 is quite intrusive although it can be implemented succinctly:


namespace mylib
{
   using std::size; // "publish" ordinary std::size
   // on char arrays
   template<size_t N>
   constexpr auto size(const char(&)[N]) noexcept
   {
      return N-1;
   }

   // other overloads...(e.g. wchar_t)
}

You can even overload on const char* by wrapping strlen (or such). This implementation is not constexpr, though. As I said before, we cannot generally assume that an array of N chars holds a string of N-1 characters, even if it’s null-terminated.
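Such an overload might look like this (a sketch: strlen lives in <cstring>, and we assume the pointer really refers to a null-terminated string):

namespace mylib
{
   // runtime-only: the length is not known at compile time
   inline auto size(const char* str) noexcept
   {
      return strlen(str);
   }
}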

mylib::empty is similar.

EXPECT_EQ(5, mylib::size("hello"));            // uses the array overload
EXPECT_EQ(5, mylib::size(string("hello")));    // uses std::size
EXPECT_EQ(3, mylib::size(vector<int>{1,2,3})); // uses std::size

Clearly, string_view would solve most of the issues (and it has constexpr support too), but I think you have understood my point.

[Edit] Many people did not get my point. In particular, some have fixated on the example itself instead of getting the sense of the post. They just suggested string_view for solving this particular problem. I said that string_view would help a lot here, however I wrote a few times throughout this post that string_view was not viable.

My point is just be aware of the null-terminator when using generic functions like std::size, std::empty, std::begin etc because the null-terminator is an extra information that such functions don’t know about. That’s it. Just take actions as you need.

Another simple example consists in converting a sequence into a vector of its underlying type. We don’t want to store the null terminator for char arrays. In this example we don’t even need std::size but just std::begin and std::end (thanks to C++17 class template deduction):

template<typename T>
auto to_vector(const T& seq)
{
  return vector(begin(seq), end(seq));
}

Clearly, this exhibits the same issue discussed before, requiring extra logic/specialization for char arrays.
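A sketch of such a specialization, along the same lines as the mylib::size overload above (assuming the array really holds a null-terminated string):

template<size_t N>
auto to_vector(const char(&arr)[N])
{
  return vector<char>(arr, arr + N - 1); // drop the null terminator
}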

I stop here, my intent was just to let you know about this fact. Use this information as you like.

 

Conclusions

TL;DR: Just know how std::size and std::empty work on static C-strings.

  • static C-strings are null-terminated arrays of characters (size = number of chars + 1),
  • std::size and std::empty on arrays simply give the total number of elements,
  • be aware of the information above when using std::size and std::empty on static C-strings,
  • it’s quite easy to wrap std::size and std::empty for handling strings differently,
  • string_view could be helpful.

I know, it’s been a while since the last time I published something on my blog. The main reason is that in my spare time – apart from private life – I’ve been committed to organize events and activities in Italy and also to work on a personal project with a great friend of mine. Anyway, I found some time to share a new blog post I hope you will like.

This article is also part of my series C++ in Competitive Programming.

In the very first installment of this series, I showed an example whose solution amazed some people. Let me recall the problem: we have to find the minimum difference between any two elements in a sorted sequence of numbers. For example:

[10, 20, 40, 100, 200, 300, 1000]

The minimum difference is 10, that is 20-10. Any other combination is greater. Then, I showed an unrolled solution for such problem (not the most amazing one!):

vector<int> elems = ...;
auto minDiff = numeric_limits<int>::max();
for (auto i=0u; i<elems.size()-1; ++i)
{
    minDiff = min(elems[i+1]-elems[i], minDiff);
}

Imagine there is always at least one element in the sequence. Note that we calculate elems[i+1]-elems[i] for each i from 0 up to length-2; meanwhile, we keep track of the minimum of such differences. I see a pattern, do you?

Before getting to the point, let me show you another example. We want to calculate the number of equal adjacent characters in a string:

ABAAACCBDBB

That is 4 (the adjacent equal pairs are AA, AA, CC and BB):

ABAAACCBDBB

Again, here is a solution:

string s = ...;
auto cnt = 0;
for (auto i=1u; i<s.size(); ++i)
{
    if (s[i]==s[i-1])
        cnt++;
}
cout << cnt;

We compare s[i] with s[i-1] and, meanwhile, we count how many trues we get. Careful readers will spot a little difference from the previous example: at each step, we access s[i] and s[i-1], not s[i+1] and s[i]. I’ll develop this subtlety in a moment. Now, please, give me another chance to let you realize the pattern yourself, before I elaborate more.

This time we have two vectors of the same size, containing some values and we want to calculate the maximum absolute difference between any two elements. Imagine we are writing a test for a numerical computation, the first vector is our baseline (expectation) and the second vector is the (actual) result of the code under test. Here is some code:

vector<double> expected = ...;
vector<double> actual = ...;
double maxDifference = 0.0; // fabs cannot be smaller than 0
for (auto i=0u; i<expected.size(); ++i)
{
    maxDifference = max(maxDifference, fabs(expected[i]-actual[i]));
}
cout << maxDifference;

Here we access the ith-elements of both the vectors at the same time. Is that similar to the other examples?

It’s time to get to the point, although you are already there if you have read the intro of this series – if you have not, please stay here and don’t spoil the surprise yourself!

Actually, there is not so much difference among the examples I showed. They are all obeying the same pattern: given two sequences, the pattern combines every two elements from input sequences at the same position using some function and accumulates these intermediate results along the way.

Actually, in functional programming, this pattern is the composition of three different patterns:

zip | map | fold

zip makes “pairs” from input sequences, map applies a function to each pair and returns some result, fold applies an operation to reduce everything to a single element. A picture is worth a thousand words:
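With two input sequences A and B, a combining function f (map) and a reducing function g (fold), the composition looks like this:

A = [a1, a2, a3]
B = [b1, b2, b3]

zip    -> [(a1,b1), (a2,b2), (a3,b3)]
map f  -> [f(a1,b1), f(a2,b2), f(a3,b3)]
fold g -> g(g(f(a1,b1), f(a2,b2)), f(a3,b3))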

For simplicity, imagine that zip and map are combined together into a single operation called zipWith.

Now we have two customization points:

  1. which function zipWith uses to combine each pair, and
  2. which function fold uses to reduce each result of zipWith to a single element.

The general case of this pattern operates on any number of sequences, making a tuple for each application of zip (e.g. imagine we zip the rows of a matrix, we get its columns).

In C++ we have an algorithm that (partially) implements this pattern: inner_product. I said “partially” just because it accepts only two ranges and for this reason I say “pairs of elements”, not tuples – as in the general case. In C++17’s parallel STL, inner_product is made parallel by transform_reduce (be aware of the additional requirements).

In the future we’ll do such things by using new tools that will be incorporated into the standard: ranges. For now, inner_product is an interesting and (sometimes) underestimated tool we have. Regardless of whether you are going to use this pattern in real-world code, I think that understanding when it applies is mind-blowing. If you regularly practice competitive programming as I do, you have the opportunity to recognize many patterns to solve your problems. In the last years I have found several cases where this pattern worked smoothly and I am sharing a few here.

The simplest form of inner_product takes two ranges of the same length and a starting value, and it calculates the sum of the products of each pair. It’s literally an inner product between two vectors. inner_product has two additional customization points to replace “product” and “sum” as we wish.
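A minimal sketch of the default form (sum of products):

vector<int> a = {1, 2, 3};
vector<int> b = {4, 5, 6};
cout << inner_product(begin(a), end(a), begin(b), 0); // 1*4 + 2*5 + 3*6 = 32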

Let’s have some fun.

It’s time to code the solutions to the previous challenges in terms of inner_product. I’ll start from the last one.

I recall that we want to find the maximum absolute difference of two vectors of double. In this case we replace “product” with “absolute difference” and “sum” with “max”. Or, we combine each pair by calculating the absolute difference and we keep track of the maximum along the way. I stress the fact that we reduce the combined pairs along the way and not at the end: inner_product is a single-pass algorithm (e.g. it works on stream iterators).

Here is the code:

cout << inner_product(begin(expected), end(expected),
                      begin(actual),
                      0.0, // starting value
                      [](auto a, auto b){ return max(a,b); }, // "sum"
                      [](auto l, auto r) { return fabs(r-l); }); // "product"

I tend to use standard function objects as much as possible. They are just clearer, shorter and (should be) well-known. Thinking a bit more we come up with:

cout << inner_product(begin(expected), end(expected),
                      begin(actual),
                      0.0, // starting value
                      [](auto... a){ return max(fabs(a)...); }, // "sum"
                      minus<>{}); // "product"

Better than the other? It’s debatable, I leave the choice to you.

The other two challenges are on a single sequence, aren’t they? Does the pattern still apply?

It does.

Zipping two distinct ranges is probably more intuitive, but what about zipping a sequence with itself? We only have to pass the right iterators to inner_product. We want to combine s[i] with s[i-1]: zipWith should use operator== and fold operator+. For the first sequence we take S from the second character to the last; for the second sequence we take S from the first character to the second to last. That is:

  1. S.begin() + 1, S.end()
  2. S.begin(), S.end()-1

We have to pass the first three iterators to inner_product:

cout << inner_product(next(begin(s)), end(s),
                      begin(s),
                      0,
                      plus<>{},
                      equal_to<>{});

We zip with equal_to, which uses operator== under the hood, and we fold with plus<>, which applies operator+. As you see, not having to specify the second sequence’s boundary is quite handy. If the solution is not clear, consider how the two ranges line up:

When equal_to is called, the left hand side is S[i] and the right hand side is S[i-1]. In other words, the first range is passed as the first parameter to zipWith and the second range as the second parameter.

Careful readers will spot a subtle breaking change: the solution is not protected against an empty string anymore. Advancing an iterator that is not incrementable (e.g. end) is undefined behavior. We have to check this condition ourselves, if needed. This example on HackerRank never falls into such a condition, so the solution is just fine.
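If we did need the guard, a minimal sketch could be:

auto cnt = s.empty() ? 0 : inner_product(next(begin(s)), end(s), begin(s), 0, plus<>{}, equal_to<>{});
cout << cnt;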

Finally, in the first exercise we are requested to calculate the minimum difference between any two elements in a sorted sequence of numbers. I intentionally wrote elems[i+1]-elems[i] and not elems[i]-elems[i-1]. Why? Just to show you another form of the same pattern. This one I like less because the call to inner_product is more verbose:

auto minDiff = inner_product(v.begin(), v.end()-1, v.begin()+1, numeric_limits<int>::max(),
                             [](int l, int r){ return min(l, r); }, // "sum"
                             [](int l, int r) { return r-l; } // "product"
);

We can apply the other pattern by (mentally) turning the loop into elems[i] – elems[i-1]:

auto minDiff = inner_product(next(v.begin()), v.end(), v.begin(), numeric_limits<int>::max(),
                             [](auto... a){ return min(a...); }, // "sum"
                             minus<>{}
);

As before, the solution is not protected against an empty sequence. You understand that zipping a sequence on itself (by shifting it) is never protected against an empty range.

This pattern works in all the examples above just because the stride between two elements is 1. If it were greater, it would fail – I know, we could use boost::strided or such, but I don’t mind that here. Basically, we have processed adjacent elements in a “window” of size 2. There are scenarios where this window can be larger and inner_product still applies.

As an example we take Max Min on HackerRank. This problem is very close to calculating the minimum difference between any two elements in a sorted sequence of numbers. It states: Given a list of N non-unique integers, your task is to select K integers from the list such that its unfairness is minimized. If (x1,x2,…,xk) are K numbers selected from the list, the unfairness is defined as:

max(x1, x2, …, xk) – min(x1, x2,…, xk)

A possible solution to this problem consists in sorting the sequence and applying zipWith | fold as we did in the very first example. The only difference is that the distance between the two elements we zip together is K-1:

sort(begin(v), end(v));
cout << inner_product(
    next(v.begin(), K-1), v.end(),
    v.begin(), numeric_limits<int>::max(),
    [](auto... a) { return min(a...); },
    std::minus<>{});

Do not misunderstand: inner_product still steps by one every time and still combines elements in pairs. It’s just that we zip the sequence with itself by shifting it by K-1 positions and not by just 1: we pair v[i+K-1] with v[i].

Although the size of the window is K, inner_product still works in the same way as before. The pairs it conceptually creates (remember that the first sequence is shifted by K-1 positions) are (v[K-1], v[0]), (v[K], v[1]), and so on.

This works only if K is less than or equal to N (and N has to be at least 1).

The pattern fits this problem because we turn the sequence into a particular structure: we sort it. We have to select K elements and we know that min(x1,…,xk) is x1 and max(x1,…,xk) is xk. This is just an effect of sorting. So we just check all these possible windows, incrementally, by using only x1 and xk. We may ignore everything inside the window. Another interesting property is that, since the sequence is sorted, the first range passed to inner_product is never smaller than the second, for each iteration. This is why we can use minus<> for zipWith. If we wanted the opposite, we would have changed the order of the iterators or we would have iterated backwards. Using algorithms makes variations simpler than rolling a for loop.

 

Recap for the pragmatic C++ competitive coder:

  • In C++, (zip | map | fold) on two ranges is implemented by inner_product:
    • set the first callable to customize fold (plus by default);
    • set the second callable to customize (zip | map) – combined in a single operation (multiplies by default);
    • the order of the iterators matters: the first range is the left hand side of zipWith, the second range is the right hand side;
  • zipping a sequence on itself is just the same pattern;
    • be aware it won’t work with ranges shorter than the number of positions we shift the sequence by (e.g. 1 in the first 3 examples);
  • practicing recognizing and understanding coding patterns is food for the brain.

string_view odi et amo


string_view-like wrappers have been successfully used in C++ codebases for years, made possible by libraries like boost::string_ref. I think all of you know that string_view has joined the C++ standard library since C++17.

Technically, basic_string_view is an object that can refer to a constant contiguous sequence of char-like objects with the first element of the sequence at position zero. The standard library provides several typedefs for standard character types and std::string_view is simply an alias for:

basic_string_view<char>

For simplicity, I’ll just refer to string_view for the rest of the post, but what I’m going to discuss is valid for the other aliases as well.

You can imagine string_view as a smart const char* which provides any const member function of std::string as well as a few handy utilities to reduce its span. You cannot enlarge a string_view until you reassign it. Other languages (e.g. Go) have similar constructs that permit growing the range as well as participating in the ownership of such a range. Although string_view does not, the power of such a simple wrapper is still huge.
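For instance, a minimal sketch of those span-reducing utilities:

string_view sv{"hello world"};
sv.remove_prefix(6); // sv now refers to "world"
sv.remove_suffix(2); // sv now refers to "wor"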

The applications of string_view are many and it’s relatively simple to let string_view join your codebase. For years, I’ve been using a proprietary implementation of string_view dating back to the 90s, then improved on the basis of boost::string_ref and recently of std::string_view. If you start today, it’s very likely you can use your compiler’s string_view implementation (e.g. the latest Visual Studio 2017 RC, clang and GCC support it), you can grab an implementation from the web or you can just use boost::string_ref or another library (e.g. Google’s, folly).

One can think that using string_view is as simple as using std::string, with the only difference that string_view does not take ownership of the char sequence and cannot change its content. That’s not completely true. Adopting string_view requires you to pay attention to a few other traps that I’m going to describe later on. Before starting, let me show you a couple of simple examples.

Generally speaking, string_view is a good friend when we need to do text processing (e.g. parsing, comparing, searching), but first of all, string_view is an adapter: it allows different string types to be adapted into a std::string-like container. This means that string_view provides iterator support and STL naming conventions (e.g. size, empty). To create a string_view, we only require a null-terminated const char* or both a const char* and a length. Note that in the latter case we don’t need the char sequence to be null-terminated.

Suppose now that our codebase hosts many different string types but we want to write only one function doing a certain task on constant strings. Can string_view help? It can, if the string types manage a contiguous sequence of characters and also provide (read) access to it. Examples:

CString cstr = ...
string_view cstrv {cstr.GetString()};
string stdstr = ...
string_view stdstrv {stdstr};
QString qstr = ...
string_view qstrv {qstr.toLatin1().constData()};

Then we may write only one function for our task:

ReturnType readonly_on_string_function(string_view sv); // only one implementation

Into readonly_on_string_function we can exploit the whole set of const functions of std::string. Just this simple capability is priceless. You know what I mean if you use more than three string types in your codebase :)

To show you other string_view functionalities, let me consider the problem of splitting a string. This problem can be tackled in many ways (e.g. iterator-based, range-based, etc) but let me keep things simple:

vector<string> split(const string& str, const char* delims)
{
    vector<string> ret;
    string::size_type start = 0;
    auto pos = str.find_first_of(delims, start);
    while (pos != string::npos)
    {
        if (pos != start)
        {
            ret.push_back(str.substr(start, pos - start));
        }
        start = pos + 1;
        pos = str.find_first_of(delims, start);
    }
    if (start < str.length())
        ret.push_back(str.substr(start, str.length() - start));
    return ret;
}

The worst things about this function are (imho):

  • we create a new string for each token (this possibly ends up in dynamic allocation);
  • we can split only std::string and no other types.

Since string_view provides every const function of string, let’s try simply replacing string with string_view:

vector<string_view> split(string_view str, const char* delims)
{
    vector<string_view> ret;
    string_view::size_type start = 0;
    auto pos = str.find_first_of(delims, start);
    while (pos != string_view::npos)
    {
        if (pos != start)
        {
            ret.push_back(str.substr(start, pos - start));
        }
        start = pos + 1;
        pos = str.find_first_of(delims, start);
    }
    if (start < str.length())
        ret.push_back(str.substr(start, str.length() - start));
    return ret;
}

Not only is the code still valid, but it is also potentially less demanding, because we just allocate 8/16 bytes (respectively on 32- and 64-bit platforms – a pointer and a length) for each token.

Now, let’s use some utilities to shrink the span. Suppose I get a string from some proprietary UI framework control, providing its own string representation:

auto name = uiControl.GetText();

Then imagine we want to remove all the whitespaces from the start and the end of such string (we want to trim). We can do it without changing the string itself, just by using string_view:

string_view sv = name.GetString(); // GetString() returns a null-terminated const char*
sv.remove_prefix(std::min(sv.find_first_not_of(" "), sv.size())); // left trim
sv.remove_suffix(std::min(sv.size()-sv.find_last_not_of(" ")-1, sv.size())); // right trim

remove_prefix moves the start of the view forward by n characters, remove_suffix does the opposite. Edge cases have been handled succinctly.

Now we have a string_view containing only the “good” part of the string. At this point, let me end with a bang: we’ll use the sanitized string to query a map without allocating extra memory for the key. How? Thanks to heterogeneous lookup of associative containers:

map<string, UserProfile, less<>> nameToProfile;
nameToProfile.find(sv); // won't create a temporary string for the key

That’s possible because less<> is a transparent comparator and string_view can be implicitly constructed from std::string (thus, we don’t need to write operator< between std::string and std::string_view). That’s powerful.

It should be clear that string_view can be dramatically helpful in your daily job and I think it’s quite useless to show you other examples to support this fact. Rather, let me discuss a few common pitfalls I have met in the last years and how to cope with them.

#1: “losing sight of the string”

The first error I have encountered many times is storing string_view as a member variable and forgetting that it will not participate in the ownership of the char sequence:

class StatefulParser
{
public:
    StatefulParser(string_view current) : current(current) {}
    //...
    string_view current;
};

StatefulParser Parse(const string& current)
{
    return {current};
}

Suppose that Parse is never called with a temporary (moreover, we can enforce that assumption just by deleting such an overload). This code is still fine, because the caller of Parse also has ‘current’ in scope. Then, some time later, a programmer who is not very familiar with string_view (or who is simply heedless) introduces the following error:

StatefulParser Parse(const string& current)
{
    string someProcessing = current.substr(...);
    return {someProcessing}; // oops
}

‘someProcessing’ is a temporary string and then StatefulParser will¬†very likely refer to garbage.

So, string_view (as well as span, array_view, etc) is often not recommended as a data member. However, I think that string_view as a data member is sometimes useful, and in these scenarios we need to be prudent, just as when using references and pointers as data members.

#2: replacing const string& with string_view

string_view seems a drop-in replacement for const std::string& because it provides the whole set of std::string‘s const functions and also because it’s a view (reference). So, the general rule you hear pretty much everywhere (especially nowadays that string_view has officially joined the C++ standard) is “whenever you see const string&, just replace it with string_view“.

So let’s do that:

void I_dont_know_how_string_will_be_used_but_i_am_cool(const string& s);

We turn into:

void I_dont_know_how_string_will_be_used_but_i_am_cool(string_view s);

As users of this function, we are now permitted to pass whatever valid string_view, aren’t we?

As writers of this function, we may have now serious problems.

We have introduced a subtle change to our interface that breaks a sort of guarantee that we had before: null-termination. string_view does not require (and then does not necessarily handle) a null-terminated sequence. On the other hand, string guarantees to give one back – with c_str().

Maybe you don’t need that feature; in this case, the rest of the interface should be ok. Otherwise, if you are lucky, your code simply stops compiling because you are using c_str() somewhere in the code. Else, you are using data(), and the code continues compiling just fine because string_view provides data() as well.

This is not a syntactic detail. What should be clear is that the interface of ‘I_dont_know_how_string_will_be_used_but_i_am_cool’ is not seamlessly changed, because now the user can just pass in a not null-terminated sequence of characters:

string something = "hello world";
I_dont_know_how_string_will_be_used_but_i_am_cool(string_view{something.data(), 5}); // hello

Suppose at some point you call a C function expecting a null-terminated string (it’s common), then you call .data() on string_view. What you obtain is “hello world\0” instead of what the user expected (“hello”). In this case, you maybe only get a logical error, because the \0 is at the end of the string. In this other case you are not so lucky:

char buff[] = {'h', 'e', 'l', 'l', 'o'};
I_dont_know_how_string_will_be_used_but_i_am_cool(string_view{buff, 5});

Even if uncommon (generally string_view refers to real strings, which are always null-terminated), that’s even worse, isn’t it?

In general, string_view “relaxes” (does not have) that requirement on null-termination (it’s just a wrapper on const char*). Imagine that the DNA, the identity, of string_view is made of both the pointer to the sequence of characters and the number of referred characters (the length of the span). On the other hand, since string::c_str() guarantees that the returned sequence of characters is null-terminated, you can think that the identity of a string is just what c_str() returns – the length is redundant information (e.g. computable by strlen(str.c_str())).

To conclude this point, replacing const string& with string_view is safe as long as you don’t expect a null-terminated string – if you are using c_str() then you can figure that out at compile time, because the code simply does not compile; otherwise you are possibly in trouble.

Since we are on the subject: replacing const string& with string_view has also another (minor) consequence, because passing a string_view involves some work, that is copying a pointer and a length. The latter is an extra, compared to const string&. That’s just theory. In practice, measure when in doubt.

#3: string = string_view::data() + string_view::size()

From the previous point, it should be evident that wherever you need to create a string from a string_view you have to use both data() and size(), and not only data(). You have to use the DNA of string_view. I have reviewed this error many times:

string_view sv = ...;
string s = sv.data(); // possibly UB

It does not work in general, for the same reasons I have just shown you (e.g. this constructor of std::string requires a null-terminated sequence of characters).

From C++17 you can just use one of string’s constructors:

string s { sv };

Before C++17, we have to use data() + size():

string s { sv.data(), sv.size() };

Clearly, as for std::string, you have to do the same for other string types. E.g.:

CString cstr { sv.data(), sv.size() };

#4: numerical conversions

Although C and C++ provide many functions to perform conversions between a number and a string/C-string (and vice versa), none supports a range of characters (e.g. begin + end, or begin + length). Moreover, every C/C++ conversion function expects the input string to be null-terminated. These facts lead to the conclusion that no function exists to convert a string_view into a number out of the box. We can use some C/C++ functions, but we have limitations. I’ll show you some in this section.

For instance, using atoi or C++11 functions we fall into traps or undefined behavior:

string whole = "1234987";
string_view s { whole.data(), 4 }; // 1234
auto i = atoi(s.data()); // ooops...1234987

So, how to properly convert a string_view into a number? Many ways exist, generally motivated by different requirements and compromises. For the rest of this section I’ll refer only to int conversions because the end of the story is similar for other numeric types.

Sometimes, although it seems counterintuitive, to fulfill the null-termination requirement we can create an intermediate std::string (or char array):

string s { sv.data(), sv.size() }; // pre-C++17
auto i = atoi(s.c_str());

Actually, having a std::string we can rely on any C and C++ conversion function. Such an intermediate step of copying into a std::string is sometimes affordable because certain numeric types – like int – have a small maximum number of digits (e.g. 11 for int). As long as the char sequence really contains such small data, the resulting std::string will be created without allocating dynamic memory thanks to SSO (Small String Optimization). Clearly, that shortcut does not hold for bigger numeric types and in general is not portable.

Other fragile solutions I encountered were based on sscanf and friends:

int to_int(string_view sv)
{
    char formatter[24] = {};
    sprintf(formatter, "%%%zud%%n", sv.size());
    int num, n{0};
    if (sscanf_s(sv.data(), formatter, &num, &n) == 1 && n == sv.size()) // could emit a warning because sv.size() is unsigned
        return num;
    // else: (some) error handling
}

In some cases this code does not behave how we expect – e.g. when the converted value overflows and when the sequence contains leading whitespaces. Although I don’t recommend this approach, compared to the previous one it only allocates a fixed amount of characters (e.g. 24) on the stack.

In many other cases, the approach is strictly based on how string_view is employed. This means that we have to make some assumptions. For example, suppose we write a parser for URLs where we assume that each token is separated by ‘/’. Since atoi and strtol stop after the last character interpreted, if the whole URL is both well-formed and stored in a null-terminated string (assumptions/preconditions), we can use such functions quite safely:

string url("website/number1/1/number2/2"); // certainly null-terminated
auto tokens = split(url, "/"); // vector<string_view>
auto number1AsInt = strtol(tokens[2].data(), nullptr, 10); // 1

Basically, we assumed that the character past the end of any string_view is either a delimiter or the null-terminator. Pragmatically, many times we can make such assumptions, even if they distance our solution from genericity.

So, I encountered code like that:

string_view parse_int(string_view sv, int& i)
{
    char* endPtr = nullptr;
    i = strtol(sv.data(), &endPtr, 10);
    return sv.substr(endPtr - sv.data());
}

In this example we use strtol to read an int and then we return the rest of the string_view. We basically try to “consume” an int from the beginning of the string_view.

Note that C and C++ conversion functions have more or less relaxed policies on errors (mainly for performance reasons). For instance, if the conversion cannot be performed, strtol returns 0 and if the representation overflows, it sets errno to ERANGE. Instead, in the latter case the return value of atoi is undefined. What I really mean is that if you decide to use such functions then you are going to accept the consequences of their limitations. So, just pay attention to such limitations and take actions if needed. For example, a more defensive version of the previous code is:

string_view parse_int(string_view sv, int& i)
{
    if (sv.empty() || (!isdigit(sv[0]) && sv[0] != '-' && sv[0] != '+'))
    {
        // handle error
    }
    char* endPtr = nullptr;
    i = strtol(sv.data(), &endPtr, 10);
    if (*endPtr != 0) // assumption
    {
        // handle error
    }
    return sv.substr(endPtr - sv.data());
}

The fact that it makes sense to check against the null terminator (if (*endPtr != 0)) is the fundamental assumption we made here. Generally such an assumption is easy to make. Scenarios like this, instead:

string whole = "12345";
parse_int ( {whole.data(), 3}, i );

are still not covered, because the length of the string_view is not taken into account. For this, we have at least three options: create and use an intermediate std::string (or use a std::stringstream – however, only std::string benefits from SSO), improve the sscanf-based solution so that it somehow uses such information, or write a conversion function manually. It’s quite clear that C++ lacks a set of simple functions to convert char ranges to numbers easily, efficiently and with robust error handling.

Actually, I think the most elegant, robust and generic solution is based on boost::spirit:

int to_int(string_view src)
{
    int dest;
    if (parse(cbegin(src), cend(src), int_, dest))
        return dest;
    // error handling
}

However, if you don’t already depend on boost, it’s quite inconvenient to pull it in just for converting strings into numbers.

We have a happy ending, though. Finally, C++17 fills this gap by introducing elementary string conversion functions:

string_view sv = ...
int num;
auto res = from_chars(sv.data(), next(sv.data(), sv.size()), num);

This new function will just convert the given range of characters into an integer. It is locale-independent, non-allocating, and non-throwing. Only a small subset of parsing policies used by other libraries (such as sscanf) is provided. This is intended to allow the fastest possible implementation. Clearly, overloads for other numeric types are provided by the standard.

To be thorough, here is an example of the opposite operation, using to_chars:

char arr[5] {};
auto value1 = 10, value2 = 20;
auto ptrStart = arr; auto ptrEnd = arr + 5;
auto res = to_chars(ptrStart, ptrEnd, value1);
if (res.ec == errc{}) // fitted the buffer?
    res = to_chars(res.ptr, ptrEnd, value2);
// ['1', '0', '2', '0', 0]

Both to_chars and from_chars return a minimal result which contains an error code (ec) and a pointer (ptr) to the first character at which the parsing stopped (e.g. something like what is written into endPtr in the strtol example).
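For instance, a minimal sketch of checking that result (std::errc{} signals success):

string_view sv = "42abc";
int num = 0;
auto res = from_chars(sv.data(), sv.data() + sv.size(), num);
if (res.ec == std::errc{}) // conversion succeeded
{
    // num is 42; res.ptr points to 'a', the first character not consumed
}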

Are you already looking forward to putting your hands on them?!

 

Here is a wrap-up of the main points we covered in this post:

  • string_view is a smart const char*: an object that refers to a constant sequence of characters, keeps track of its length and provides any const function of std::string;
  • just like a reference or a pointer, you have to pay attention when storing string_view as a member variable;
  • string_view’s DNA is both the char sequence and the length:
    • the pointed sequence of characters is not necessarily null-terminated (e.g. c_str() does not exist);
    • whenever you need to copy the content of a string_view into a string(-like container), you have to use both;
  • bear in mind that replacing const string& with string_view implies the user can start passing not null-terminated strings into your functions (just ask yourself if that makes sense);
  • to convert a string_view into a number:
    • pre-C++17: use boost::spirit if you can, agree to compromises and use C/C++ functions with their limitations, or roll some utilities yourself;
    • since C++17: use from_chars;
  • string_view is already available in:
    • Microsoft Visual Studio 2017 RC
    • clang HEAD 4.0 (or in 3.8, under the experimental include folder)
    • gcc HEAD 7.0

The last few weeks were positively demanding for me…

At the end of October we organized the C++ Day, an event entirely dedicated to C++, made by the Italian C++ Community that I lead and coordinate. If you feel like reading some lines about the event, have a look here. It was a great day!

C++ Day 2016 ~ Florence

Some days after, I left for Seattle, to attend the Microsoft MVP Summit at the Microsoft Campus in Redmond. An awesome experience!

Italian Microsoft MVPs at the MVP Global Summit 2016

By coincidence, the ISO C++ Standard meeting was happening exactly the same week I was in Redmond. I couldn’t miss it! Then, at the end of the Summit, a few other MVPs (like Marius Bancila and Raffaele Rialdi) and I went to Issaquah to attend the meeting for half a day. The game was afoot.

The experience was really amazing. First of all we added our names to the attendance sheet to get them immortalized in the minutes :) That was suggested by Herb, who was very kind to all of us.

From left: Marco Arena, Herb Sutter, Raffaele Rialdi, Marius Bancila

Then, I am glad we met many members of the committee. They were really kind and welcoming.

I think attending one of such meetings is a must for whoever cares about the C++ language and also wants to understand how things are discussed and evolved.

You probably know that the committee is divided into a few working and study groups – WG and SG. The working groups are Evolution, Library Evolution, Library, and Core. We were sitting in the Evolution Working Group (EWG), where we heard discussions about a few proposals for C++17 and C++20.

A proposal presentation starts with the author(s) defending the idea, going through the paper and showing examples. For the sessions I attended, that part was quick (about 10 minutes) and other people interrupted only for small clarifications.

Then the discussion starts. It is coordinated by a guy who goes around the room and moderates the discussion. Each member who wants to say something just raises her hand and politely waits to take the floor. Too many times I attend meetings where people just interrupt others. That was exactly the opposite!

Speaking with some guys of the committee, I discovered that some (crucial) discussions are instead plenary (they involve the whole committee and not only a certain working group) and they take place in another Рbigger Рroom.

The discussions I was present at ended with a poll. Something like “how many people agree? How many disagree? How many strongly disagree? Etc.”. It also happened that a discussion was simply postponed because the co-author – Bjarne Stroustrup – was not there.

Each proposal is deeply inspected by bringing out lots of details, counterexamples and observations. That part was the most instructive for me.

On that point, I realized that one thing is particularly vital for the committee: heterogeneity. People have different backgrounds/interests and they use C++ in different ways. The details that come out from discussions reflect this heterogeneity. Without that we would lose many details and observations.

For example, at some point Peter Sommerlad took the floor and asked something like “so, if we accept this proposal we should start teaching people to stop doing X”. Peter made that observation because he is a professor and his point of view is often influenced by his main job.

Other examples were concerns about legacy and old code, which a certain proposal could break under some circumstances that a few people were working on daily. Also interesting were the observations made by compiler implementers, because they often already see how complicated it would be to implement a certain new C++ feature.

The experience was definitely worth it. Thanks to Herb and Andrew Pardoe for their hospitality.

Sometimes such meetings take place in Europe, so if you cannot go to the USA then just wait for one happening on this side of the world and attend! I’ll do it again.

The next week I went to Berlin for Meeting C++ 2016. I am happy I was part of the staff. There I had the opportunity to meet Bjarne Stroustrup and to dine with him and with other special people.

From left: Michele Adduci, Valentino Picotti, Marco Arena, Bjarne Stroustrup, Jens Weller, Gian Lorenzo Meocci, Marco Foco

This amazing experience concludes my “C++ & friends weeks”.

My short-term plans consist mostly in blogging – I want to write a new “C++ in Competitive Programming” installment, as well as a couple of posts I have had in mind for months – and in planning the next events and activities. In 2017 my spare time won’t be more than in 2016, but I hope to be more active.

In June I want to make a new C++ event in Italy. If you feel like supporting/sponsoring/helping please get in touch with me.

Crafting software is about balancing competing trade-offs. It’s impossible to optimize every factor of a system, such as speed, usability and accuracy, at the same time. Moreover, solutions of today impact decisions and solutions of tomorrow. On the other hand, in Competitive Programming, the best solution is one that just makes each test case pass.

However, since each challenge has specific requirements and constraints, we can still imagine it as a small and simplified component of a bigger system. Indeed, a challenge specifies:

  • the input format and the constraints of the problem;
  • the expected output;
  • the environment constraints (e.g. the solution should take less than 1s);
  • [optional] amount of time available to solve the challenge (e.g. 1 hour).

Questions like “can I use a quadratic solution?” or “may I use extra space?” will be answered only by taking into account all the problem information. Balancing competing trade-offs is key in Competitive Programming too.

This post is a bit philosophical: it’s about what I consider the essence of Competitive Programming and why a professional could benefit from it. Since February of this year I’ve been organizing monthly “coding dojos” on these themes here in Italy. After some months, I’m seeing good results in people regularly joining my dojos and practicing at home. It’s likely that in the future I’ll dedicate a post to how these dojos are organized, but if you want more information now, please get in touch.

Follow me through this discussion and I’ll explain some apparently hidden values of Competitive Programming and why I think they are precious for everyone.

Let me start by stating that the only real compromise in Competitive Programming is one that makes an acceptable solution. However, we distinguish two distinct “coding modes”: competition and practice. Competition is about being both fast and accurate at the same time: you just have to correctly solve all the challenges as fast as possible. On the other hand, practice mode is the opportunity to take time and think more deeply about compromises, different approaches and paradigms, scalability, etc. In practice mode we can also explore the usage of standard algorithms. A story about that concept: at one of my dojos I proposed this warm-up exercise:

Given two arrays of numbers A and B, for i from 0 up to N-1, count how many times A[i] > B[i].

For example:

A = [10, 2, 4, 6, 1]
B = [3, 2, 3, 10, 0]

The answer is 3 (10>3, 4>3, 1>0).

This exercise was trivial and people just solved it by looping and counting.
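Probably something like this sketch (assuming A and B are the vectors from the example):

auto cnt = 0;
for (auto i = 0u; i < A.size(); ++i)
{
    if (A[i] > B[i])
        cnt++;
}
cout << cnt;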

During the retrospective I asked people: “can you solve this problem without looping explicitly?”. I lowered my aim: “suppose I provide you with a function which processes the arrays pair-wise”. One guy plucked up courage and replied “like ZIP?”. I said “Yes!”. I went to the blackboard and sketched the idea.

The C++ algorithm to use in this case was std::inner_product, which we already met in the very first post of this series. I’ll get back to inner_product again in the future; meanwhile, here is the slick solution to this problem:

cout << inner_product(begin(A), end(A), begin(B), 0, plus<>{}, greater<>{});

As we already saw, inner_product is kind of a zip_and_reduce function. Each pair is processed by the second function (greater) and results are sequentially accumulated by using the first one (plus). People realized the value of applying a known pattern to the problem. We discussed also about how easy it is to change the processing functions, getting different behaviors by maintaining the same core algorithm.
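For example, counting the positions where the two arrays hold equal values only requires swapping the pair-wise function (a sketch):

cout << inner_product(begin(A), end(A), begin(B), 0, plus<>{}, equal_to<>{});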

In this post I’ll show different scenarios by turning on “practice mode” and “competition mode” separately, discussing the different ways to approach problems and care about compromises in both.

Consider the problem we tackled in the previous post: the “most common word”. Since we used std::map, the solution was generic enough to work with types other than strings with little effort. For example, given an array of integers, we can find the mode (as statisticians call it) of the sequence.
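A sketch of that map-based counting, adapted to ints (assuming a non-empty sequence), might look like this:

int mode(const std::vector<int>& v)
{
    std::map<int, int> occurrences; // value -> count
    for (auto i : v)
        occurrences[i]++;
    return std::max_element(begin(occurrences), end(occurrences),
        [](const auto& l, const auto& r) { return l.second < r.second; })->first;
}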

Imagine this solution still fits the requirements of the challenge.

In “competition mode”, we stay with this solution, just because it works. The pragmatic competitive coder was lucky because she already had a solution on hand, so she just exults and moves on to the next challenge of the contest.

Now, let’s switch “practice mode” on. First of all, we notice two interesting constraints of the challenge: 1) the elements of the array belong to the range 0-1’000; 2) the size of the sequence is within 100’000. Often, constraints hide shortcuts and simplifications.

Ranting about life in general and not only about programming, I like to say that “constraints hide opportunities”.

In Competitive Programming this means that whenever we see particular sentences like “assume no duplicates exist” or “no negative values will be given”, it’s very likely that this information should be used somehow.

Sometimes “very likely” becomes a command and we won’t have other options to solve the challenge: we have to craft a solution which exploits the constraints of the problem. Such challenges are interesting since they require us to find a way to cut off the space of all the possible solutions.

Anyway, back to the mode problem: since we know the domain is 0-1’000, we can just create a histogram (or frequency table) and literally count the occurrences of each number:

int mode(const std::vector<int>& v)
{
    // or std::array, if it fits the stack
    std::vector<int> histogram(1'000+1); // we know v[i] is within [0, 10^3]
    for (auto i : v)
        histogram[i]++;
    auto maxElem = std::max_element(begin(histogram), end(histogram));
    return std::distance(begin(histogram), maxElem);
}

Although the solution is similar to the previous one (because std::map provides operator[]), this is dramatically faster because of contiguity.

However, we agreed to a compromise: we pay every time a fixed amount of space to allocate the histogram (~4 KB) and we support only a fixed domain of ints (the ones within 0-1’000). That’s ok for this challenge, but it might not be the case elsewhere. Balancing speed, space and genericity is something to think about in practice mode. In my coding dojos in Italy we usually discuss such things at the end of each challenge, during a 15-min retrospective.

At this point we need to answer a related question: “are we always permitted to allocate that extra space?”. Well, it depends. Speaking about stack allocations – or allocations known “at compile time” – compilers have their own limits (that can be tuned), generally in the order of a few MB.

The story is different for dynamic allocations. On websites like HackerRank we may have a look at the environment page which contains the limits for each language. For C++, we are usually allowed to allocate up to 512 MB. Other websites can limit that amount of space, typically per challenge. In these situations, we have to be creative and find smarter solutions.

Take InterviewBit as an example – a website focused on programming interviews. A challenge states: “given a read only array of n + 1 integers between 1 and n, find one number that repeats, in linear time using constant space and traversing the stream sequentially”. It’s clear that we cannot use an auxiliary data structure, so forget maps or arrays. A solution consists in applying a “fast-slow pointer strategy”, as Floyd’s algorithm does. I won’t show the solution because very soon we’ll have a guest post by Davide Di Gennaro who will show how to apply Floyd to solve another – more realistic – problem.
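
Without spoiling the challenge (the actual solution is deferred to the guest post), here is the general fast-slow pointer idea sketched on function iteration, assuming a cycle is guaranteed to exist – the template is mine:

// slow advances one step at a time, fast advances two: if iterating f
// always stays inside a cycle, the two are bound to meet, using O(1) space
template <typename T, typename F>
T meeting_point(T start, F f)
{
    T slow = f(start);
    T fast = f(f(start));
    while (slow != fast)
    {
        slow = f(slow);    // one step
        fast = f(f(fast)); // two steps
    }
    return slow; // a point inside the cycle
}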

Space and time have a special connection. The rule of thumb is that we can set up faster solutions at the price of using more space, and vice versa. It’s another compromise. Sorting algorithms are an example: some people are amazed when they discover that, under certain conditions, we can sort arrays in linear time. For example, counting sort can perform very well when we have to sort very long sequences containing values in a small range – like a long string of lowercase alphabetic letters.

Counting sort allocates an extra frequency table (a histogram, as we have just seen), containing the frequencies of the values in the range. It exhibits linear time and space complexity: O(N+K), where N is the length of the array to sort and K is the size of the domain. However, what if the elements go from 1 to N^2? Counting sort now exhibits quadratic complexity! If we need to care about this problem, we can try another algorithm, like radix sort, which basically sorts data with integer keys by grouping keys by the individual digits which share the same significant position and value.
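
To make this concrete, here is a minimal counting sort sketch for the lowercase-letters case, where K = 26 (assuming <array> and <string> are included; the function name is mine):

std::string counting_sort(const std::string& s)
{
    std::array<int, 26> freq{}; // the frequency table: O(K) extra space
    for (char c : s)
        freq[c - 'a']++;
    std::string sorted;
    sorted.reserve(s.size());
    for (int i = 0; i < 26; ++i)
        sorted.append(freq[i], static_cast<char>('a' + i)); // emit each letter freq[i] times
    return sorted; // built in O(N + K) time overall
}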

Radix sort has other limitations too, but I won’t go into details because it’s not the point of this article. Again, I just want to highlight that other decisions have to be taken, depending on the requirements and the constraints of the problem. A general rule is that we can often spend space to make solutions faster. Space and time go hand in hand.

Time is constrained as well: typically 1 or 2 seconds on HackerRank. To give you some feeling about what that means, let’s write down some timings measured on HackerRank’s environment.

std::accumulate – O(n) – on vector<int>:

100’000 elements: 65 microseconds

1’000’000 elements: 678 microseconds

10’000’000 elements: 6973 microseconds (6.973 milliseconds)

However, don’t be swayed by complexity. Time is also affected by other factors. For example, let’s try accumulating on std::set:

100’000 elements: 747 microseconds

1’000’000 elements: 16063 microseconds (16.063 milliseconds)

10’000’000 elements: timeout (>2 seconds)

You see that contiguity and locality make a huge difference.
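
If you want to reproduce such measurements, here is a minimal timing sketch (assuming <chrono>, <iostream>, <numeric>, <set> and <vector> are included; the helper name is mine, and absolute numbers obviously depend on the environment):

template <typename Range>
long long accumulate_microseconds(const Range& r)
{
    using namespace std::chrono;
    const auto start = steady_clock::now();
    volatile auto sum = std::accumulate(std::begin(r), std::end(r), 0LL);
    (void)sum; // keep the compiler from optimizing the work away
    return duration_cast<microseconds>(steady_clock::now() - start).count();
}

// usage:
std::vector<int> v(1'000'000);
std::iota(begin(v), end(v), 0);       // 0, 1, 2, ... 999'999
std::set<int> s(begin(v), end(v));    // same values, node-based storage
std::cout << accumulate_microseconds(v) << " vs " << accumulate_microseconds(s);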

Imagine now what happens if our solution passed from O(N) to O(N^2). Roughly speaking, the measurements above correspond to ~0.65 nanoseconds per element, so a quadratic solution over 10’000 elements performs ~10^8 operations: roughly 65 milliseconds. If the challenge expects at most 1 second, we can afford it. Over 100’000 elements, though, ~10^10 operations mean several seconds: out of budget.

Should we submit such a quadratic solution?

In competition mode, we may. In practice mode, we should think about it. What if the input grows? Moreover, special contests run extra tests once a day, generally stressing the solution by pushing inputs to the limit. In that case, we need to understand our constraints and limits very carefully.

Compromises also originate from the format of the input. For example, if we are given a sorted sequence of elements, we can binary-search that sequence for free (that is, we don’t need to sort the sequence ourselves). We can code even more optimized searches if the sequence has some special properties. For instance, if the sequence represents a time series containing the samples of a 10-sec acquisition at 100 Hz, we know precisely – and in constant time – where the sample of the 3rd second is. It’s like accessing an array by index.
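
Both ideas in a minimal sketch (assuming <algorithm> and <vector> are included; data and names are mine):

// sorted input: we can binary-search for free, O(log N)
std::vector<int> sorted = {1, 3, 7, 9, 12};
auto it = std::lower_bound(begin(sorted), end(sorted), 7); // points to 7

// uniformly sampled time series: 10 seconds at 100 Hz = 1000 contiguous samples;
// the first sample of second t lives at index t * 100, an O(1) computation
std::vector<double> series(10 * 100);
auto thirdSecondSample = series[3 * 100]; // like accessing an array by index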

The more we specialize (and optimize) a solution, the more we lose genericity. It’s just another compromise, another subject to discuss.

Compromises are also about types. In previous posts we have faced problems using integer values and we have assumed that int was fine. It’s not always the case. For example, many problems have 32-bit integer inputs and wider outputs – which overflow 32 bits. Sometimes the results do not even fit 64-bit ints. Since we don’t have “big integers” in C++, we have to design such a facility ourselves or… switch to another language. It’s not a joke: in “competition mode”, we just use another language if possible. It’s quicker.
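
A typical instance of the first case (sizes and values here are only illustrative): summing 10^5 values, each up to 10^9, overflows a 32-bit int, but we can make accumulate use a 64-bit accumulator by passing a 64-bit initial value (assuming <cstdint>, <numeric> and <vector> are included):

std::vector<int> values(100'000, 1'000'000'000); // 10^5 elements, each 10^9

// the sum is 10^14, which does not fit a 32-bit int (max ~2.1 * 10^9);
// the type of the initial value drives the type of the accumulator
auto total = std::accumulate(begin(values), end(values), std::int64_t{0});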

Instead, in “practice mode” we can design – or take from an external source – a big integer class, to be reused in “competition mode”. This aspect is particularly important: when I asked some top coders their thoughts on how to constantly improve and do better, their shared advice was to solve problems and refine a support library. Indeed, top-level competitive coders have their snippets on hand during contests.

This is a simplistic approximation of what we actually do as Software Engineers: we solve problems first of all by trying to directly use existing/tested/experienced solutions (e.g. libraries, system components). If that’s not possible, we have two options: adapting an existing solution to embrace the new scenario, or writing a new one (“writing” may also mean “introducing a new library”, “using some new system”, etc). Afterwards, we can take some time to understand if it’s possible to merge this solution with the others we already have, maybe with some work (we generally call it refactoring). You know, the real process is more complicated, but that’s the core of it.

Whether or not to use an existing solution is a compromise itself, mostly because the adapting and refactoring parts require time and resources.

I think everyone would love to write the most efficient solution that is also the simplest. Generally, that’s not the case.

However, in many challenges – especially the ones of difficulty Easy/Medium – “average solutions” are accepted as well (these solutions work for the challenge, even if they are not the “best” in terms of speed, space, etc).

The balance between the difficulty of a solution and its effectiveness is another compromise to take into account. About that, let me tell you a story from a few days ago at my monthly coding dojo. We had this exercise:

Given a string S, consisting of N lowercase English letters, reduce S as much as possible by deleting any pair of adjacent letters with the same value. N is at most 100.

An example:

aabccbdea

becomes “dea”, since we can first delete “aa”, getting:

bccbdea

Then we delete “cc” and get “bbdea”; finally we remove “bb” and get “dea”.

The attendees proposed a few solutions, none of which exhibited linear time and constant space complexity. Such a solution exists, but it’s a bit complicated. I proposed that challenge because I wanted people to reason about the compromises and, indeed, I made one point very clear: the maximum size of the string is 100. It’s small.

We had two leading solutions: one quadratic and one linear in both time and space. The quadratic one could be written in a few lines by using adjacent_find:

string S; cin >> S;
auto finder = [&] {
    // find the first pair of adjacent equal characters
    return adjacent_find(begin(S), end(S));
};
for (auto adjFind = finder(); adjFind != end(S); adjFind = finder())
{
    S.erase(adjFind, adjFind + 2); // erase the pair, then search again from scratch
}
cout << (S.empty() ? "Empty String" : S);

The other one used a deque as a stack (actually, I have seen this solution applied many times in other contexts):

string S; cin >> S;
deque<char> st; // used as a stack of "surviving" characters
for (auto c : S)
{
    if (!st.empty() && st.back() == c)
        st.pop_back(); // c annihilates with the character on top
    else
        st.push_back(c);
}
if (st.empty())
    cout << "Empty String";
else
{
    for (auto c : st)
        cout << c;
}

Both passed. Afterwards, we discussed the compromises of these solutions and we understood the limits of both. In a more advanced dojo, I’ll propose solving this challenge in linear time and constant space!

Competitive Programming gives us the opportunity to see with our own eyes the impact of self-contained pieces of code, first of all in terms of time spent and space allocated. As programmers, we are forced to make predictions and estimations.

Estimating and predicting time/space is fundamental to quickly decide which solution is the most viable when different compromises cross our path. Back to the mode example, using std::map or std::vector makes a huge difference. However, when the domain is small we can simply not care about that difference. Somehow, this happens in ordinary software development too. For instance, how many times do we use streams in production code? Sometimes? Never? Often? I think the fairest answer would be “it depends”. Streams are high-level abstractions, so they should be relatively easy to use. However, they are generally slower than C functions. On the other hand, C functions are low-level facilities, more error prone and less flexible. What to do entirely depends on our needs and possibilities, and very often also on our attitude and our company’s.

Although Competitive Programming offers a simplified and approximated reality, facing a problem “as-is” is an opportunity to understand compromises, estimate time and space, adapt known solutions/patterns to new scenarios, learn something completely new, think outside the box.

The latter point is related to another important fact that I see with my own eyes during my coding dojos in Italy: young non-professional people often find clever solutions quicker than experienced programmers. The reason is complex and I have discussed it with psychologists as well. To put it in simple terms, this happens because young people generally have a “weaker mindset”, have less experience and are not afraid of failure. The last point matters: many professionals are terribly afraid of failure and of making a bad impression. Thus, an experienced solution that worked before seems more reliable than something completely new. That’s basically human behavior. The problem comes up when the experienced programmer cannot think outside the box and is not able to find new and creative ways to solve a problem.

Competitive Programming is a quick and self-contained way to practice pushing our “coding mind” to the limits. As a Software Engineer, programmer and professional, I find it precious.

Competitive Programming offers both a “practice mode”, during which I can “stop and think”, reasoning about compromises, time, space, etc., and a “competition mode”, where I can “stress-test” my skills and experience.

Learning new things is often mandatory. This needs time, will and patience. But you know, it’s a compromise…

Recap for the pragmatic C++ competitive coder:

  • Compromises in Competitive Programming have different shapes:
    • Dimension and format of the input;
    • Time and space limits;
    • Data types;
    • Adaptability of a well-known solution;
    • Simplicity of the solution.
  • The essence of Competitive Programming consists in two phases:
    • Competition: the solution has just to work;
      • Top coders generally keep snippets and functions on hand to use during contests.
    • Practice: deeply understanding compromises, variations and different implementations of the solution;
      • Top coders generally refine their snippets and functions.
  • A challenge may have simplifications and shortcuts:
    • The more we use those, the less generic our solution will be;
    • Hopefully, the more we use those, the more optimized (and/or simpler) our solution will be;
    • Many challenges require us to find shortcuts in the problem constraints and description.