## C++ in Competitive Programming: warmup

Posted: April 14, 2016 in Competitive Programming

One of the first challenges in HackerRank’s “Warmup” section is probably the “Hello World” of learning arrays in any language: calculating the sum of a sequence of elements. Although this exercise is trivial, I’ll tackle it to break the ice and show you a few concepts that lay the groundwork for more complicated challenges.

I’m assuming you are already familiar with concepts like iterator, container and algorithm. Most of the time I’ll give hints for using these C++ tools effectively in Competitive Programming.

Here is the problem specification: you are given an array of integers of size N. Can you find the sum of the elements in the array? It’s guaranteed the sum won’t overflow the int32 representation.

First of all, we need an “array of size N”, where N is given at runtime. The C++ STL (Standard Template Library) provides many useful and cleverly designed data structures (containers) we don’t need to reinvent. Sometimes more complicated challenges require us to write them from scratch. Advanced exercises involve less common data structures that cannot be light-heartedly included in the STL. We’ll deal with some examples in the future.

That’s not the case here. The primary lesson of this post is: don’t reinvent the wheel. Many times standard containers fit your needs, especially the simplest one: std::vector, basically a dynamic sequence of contiguous elements.

For the purpose of this basic post, here is a list of important things to remember about std::vector:

• it’s guaranteed to store elements contiguously, so our cache will love it;
• elements can be accessed through iterators, using offsets on regular pointers to elements, using the subscript operator (e.g. v[index]) and with convenient member functions (e.g. at, front, back);
• it manages its size automatically: it can enlarge as needed. The real capacity of the vector is usually different from its length (size, in STL speaking);
• enlarging that capacity can be done explicitly by using reserve member function, that is the standard way to gently order to the vector: “get ready for accommodating N elements”;
• adding a new element at the end of the vector (push_back/emplace_back) does not cause relocation as long as the internal storage can accommodate the extra element (that is: vector.size() + 1 <= vector.capacity());
• on the other hand, adding (not overwriting) an entry at any other position requires relocating elements (possibly within the same block of memory, if the capacity allows that), since contiguity has to be guaranteed;
• the previous point means that inserting an element at the end is generally faster than inserting it at any other position (for this reason std::vector provides push_back, emplace_back and pop_back member functions);
• knowing the number of elements to store in advance is information that can be exploited by applying the synergic duo reserve + push_back (or emplace_back).

The latter point leads to an important pattern: inserting at the end is O(1) as long as the vector capacity can accommodate the extra element – vector.size() + 1 <= vector.capacity() – and amortized O(1) in general. You may ask: why not enlarge the vector first and then just assign the values? We can do that by calling resize:

resize enlarges the vector up to N elements. The new elements are initialized to a given value, or to the default one, as in this case. This additional work does not matter in this challenge; in general, however, initialization may cause some overhead (read, for example, these thoughts by Thomas Young). As a reader pointed out on reddit, push_back hides branching logic that can have some cost. For this reason he suggests that two sequential passes over the data (which is contiguous) may be faster. I think this can be true especially for small data; the classical recommendation, however, is to profile your code when such questions arise. In my opinion, getting into the habit of using reserve + *_back is better and potentially faster in the general case.

The heart of the matter is: need a dynamic array? Consider std::vector. In competitive programming, std::vector is 99% of the time the best replacement for a dynamic C-like array (e.g. T* or T**). The remaining 1% consists of more advanced challenges requiring us to design different kinds of dynamic arrays that break some of std::vector’s guarantees to gain some domain-specific performance. Replacing std::vector with custom optimized containers is more common in real-life code (to get an idea, have a look here, here and here).

If N were given at compile time, a static array could be better (as long as N is small – say, less than one million – otherwise we’d get a stack overflow). For this purpose, std::array is our friend: basically a richer replacement for T[]. “Richer replacement” means that std::array is the STL adaptation of a C array: it provides member functions we generally find in STL containers, like .size(), .at(), .begin()/.end(). std::array combines the performance and accessibility of a C-style array with the benefits of a standard container. Just use it.

Since problem statements often fix sizes up front in their requirements, we’ll see that statically-sized arrays are extraordinarily useful in competitive programming. In the future I’ll spend some time on this topic.

Now, let’s look at my snippet again: can we do better? Of course we can (from my previous post):

At this point we have the vector filled and we need to compute the sum of the elements. A hand-made for loop could do that:

Can we do better?

Sure, by using the first numeric algorithm of this series: ladies and gentlemen, please welcome std::accumulate:

One of the most important loops in programming is one that adds a range of things together. This abstraction is known as reduction or fold. In C++, reduction is embodied by std::accumulate. Basically, it accumulates elements from left to right by applying a binary operation; called with three parameters, accumulate uses operator+ as the binary operation.

std::accumulate guarantees:

• the order of evaluation is left to right (known also as left fold), and
• the time complexity is linear in the length of the range, and
• if the range is empty, the initial value is returned (that’s why we have to provide one).

The reduction function appears in this idiomatic form: ResultType f(ResultType accumulated, ElementType element).

So the result type may be different from the underlying type of the range (ElementType). For example, given a vector of const char*, here is a simple way to calculate the length of the longest string using std::accumulate (credits to Davide Di Gennaro for suggesting this example):

To accumulate from the right (known as right fold) we just use reverse iterators:

Right fold makes a difference, for example, when a non-associative function (e.g. subtraction) or a non-commutative one (e.g. string concatenation) is used.

In functional programming fold is very generic and can be used to implement other operations. In this great article, Paul Keir describes how to get the equivalent results in C++ by accommodating std::accumulate.

Does std::accumulate have any pitfalls? There exist cases where a += b is better than a = a + b (the latter is what std::accumulate does in its loop). Although hacks are doable, I think that if you fall into such a scenario, a for loop is the simplest and most effective option.

Here is another example of using std::accumulate to multiply the elements of a sequence:

std::multiplies<> is a standard function object (find others here).

Using standard function objects makes algorithm usage more succinct. For example, the problem of finding the missing number from an array of integers states: given an array of N integers called “baseline” and another array of N-1 integers called “actual”, find the number that exists in “baseline” but not in “actual”. Duplicates may exist. (This problem is a generalization of the “find the missing number” problem, where the first array is actually the range from 0 to N; there, a clever solution is to apply the famous Gauss formula N(N+1)/2 and subtract the sum of the elements of “actual” from this value.) An example:

The missing number is 2.

A simple linear solution is calculating the sums of both sequences and then subtracting the results, obtaining the missing number. This solution, however, may easily result in integer overflow, which is undefined behavior for signed integers in C++. A wiser solution consists in xor-ing the elements of both arrays and then xor-ing the results.

Xor is a bitwise operation that does not “need” new bits, so it never overflows. To realize how this solution works, remember how xor works: a ^ a = 0, a ^ 0 = a, and xor is both commutative and associative.

Suppose that “a” is the result of xor-ing all the elements but the missing one – basically, the result of xor-ing “actual”. Call the missing number “b”. Xor-ing “a” with the missing element “b” gives the same result as xor-ing together all the elements of the “baseline” array; call that total “c”. We have all the information needed to find the missing value, since “a” ^ “c” is exactly “b”, the missing number. Here is the corresponding succinct C++ code:

Let’s go back to the initial challenge. We can do even better.

To realize how, it’s important to get into the habit of thinking in terms of iterators rather than containers. Since standard algorithms work on ranges (pairs of iterators), we don’t need to store input elements into the vector at all:

Advancing by one – using next – is legitimate since the problem rigorously describes what the input looks like. This snippet solves the challenge in a single line, in O(N) time and O(1) space. That’s pretty good. It’s also our first optimization (actually not required), since our solution dropped to O(1) space – using std::vector was O(N).

That’s an example of what I named “standard reasoning” in the introduction of this series. Thinking in terms of standard things like iterators – objects that separate algorithms from containers – is convenient and should become a habit. Although it may seem counterintuitive, from our perspective as C++ coders, thinking in terms of iterators is not possible without knowing containers. For example, we’ll never use std::find on a std::map; instead, we’ll call the member function map.find(), and the reason is that we know how std::map works. In a future post I’ll show you other examples on this delicate topic.

Our solution leads to ranges naturally:

view::tail takes all the elements starting from the second (again, I skipped the input length), and ranges::istream is a convenient function which generates a range from an input stream (istream_range). If we had needed to skip more elements at the beginning, we would have used view::drop, which removes the first N elements from the front of a source range.

The iterator-based and range-based versions look very similar; however – as I said in the introduction of this series – iterators are not easy to compose, whereas ranges are composable by design. In the future we’ll see examples of solutions that look extremely different because of this fact.

In Competitive Programming these single-pass algorithms are really useful. The STL provides several single-pass algorithms for accomplishing tasks like finding an element in a range, counting occurrences, verifying some condition, etc. We’ll see other applications in this series.

In the next post we’ll meet another essential container – std::string – and we’ll see other algorithms.

Recap for the pragmatic C++ competitive coder:

• Don’t reinvent containers whenever standard ones fit your needs:
• Dynamic array? (e.g. int*) Consider std::vector
• Static array? (e.g. int[]) Use std::array
• Prefer standard algorithms to hand-made for loops:
• often more efficient, more correct and consistent,
• more maintainable (a “language in the language”)
• use standard function objects when possible
• use std::accumulate to combine a range of things together
• If customizing an algorithm results in complicated code, write a loop instead
• Think in terms of standard things:
• iterators separate algorithms from containers
• understand containers member functions

## C++ in Competitive Programming: I/O

Posted: March 13, 2016 in Competitive Programming

Welcome back to my series about Competitive Programming. Here is the introduction in case you missed it.

In this post I’ll explain some common idioms to deal with input and output.

In C++ a simple task like reading an integer from the standard input can be done in different ways: using streams, C functions or OS-dependent calls. The streams model offers a pretty high-level interface, but it is generally slower than using native operating system calls; in many cases, however, it is acceptable.

I have solved a lot of challenges and very rarely have I had to switch to C functions (e.g. scanf) or turn off the synchronization between the C++ streams and the standard C streams (by calling std::ios_base::sync_with_stdio(false)). Most of the time I/O is not the point of the exercise, so we can use the convenient streams model. This point seems irrelevant but it brings about simplification, enabling us to write not only simpler but also safer idioms. We’ll see that in some cases the streams interface is enough for basic parsing as well.

Due to the nature of the challenges – generally not focusing on I/O – I remark that the following idioms are pragmatic: they “work here”, but they may not fit a production environment where streams are often avoided like the plague. As I have read in a comment on reddit: “I fear that it may lead some people to prefer quick and functional solutions that work for some cases, neglecting important traits such as scalability, performance or cross-platform compatibility”. I think that solving challenges is about prototyping a solution which has to cover some sensible scenarios cleverly set up by challenge moderators. “Production” is another business. Definitely. Requirements evolve, scalability matters, etc. Being well-versed in Competitive Programming is not enough, sure, but I think that solving challenges is an entertaining way to face new problems that maybe – at some point – you’ll deal with for “real life purposes”.

I repeat a point from the introduction that I consider the most important: competitive programming is guaranteed brain exercise: “on any given day, what you’re asked to do at work may or may not be challenging, or you may be exercising one of the many non-technical skills required of software developers. By practicing regularly with competitive programming problems, you can ensure that the coding part of your brain receives a regular workout. It’s like one of those brain training apps, but for real skills”. In addition to this, eleven other reasons are wisely explained in this nice article, which I recommend again.

After this short additional introduction, let me show some code about I/O.

Just for completeness, in the simplest cases you only need to read lonely values like numbers or strings:

```int N, M;
cin >> N >> M;
```

Remember in these challenges you don’t need to validate the input (unless stated otherwise – never found).

In one of the most common scenarios you need to read a sequence of numbers, and you can do it by using the synergetic copy_n + istream_iterator + back_inserter trio. That’s our first idiom. Generally the length of the sequence is passed just before (and the numbers are separated by whitespaces):

reserve prepares the vector to host “length” elements, possibly allocating memory. A note: I’d really prefer writing something like vector<int> sequence(reserve, length), where “reserve” here is just a tag. The same applies to resize, etc. This would also avoid the predominance of the initializer-list overload.

I used copy_n instead of copy because not only is it clearer, but also convenient in case of more input to read (if I used copy then I would need to pass an end-of-stream iterator, but passing istream_iterator<int>() is dangerous because copy goes on iterating until the stream gets invalid).

With ranges the code streamlines:

bounded returns a range of exactly length elements, in this case from the standard input (view::take is another possibility).

Since a default operator>> for vector is not defined, reading a matrix needs manual iteration (here using copy_n is even more convenient):

Remember we don’t want to create a library, because it couldn’t be used to solve challenges. For this reason we don’t introduce an operator>> for vector.

Other cases involve strings – not just words but also lines. Pay attention to getline: as I have recently blogged (for another reason, but the lesson learned was the same), getline is an unformatted function. This means it does not ignore leading delimiters (newline by default). These innocent lines can lead to something you may not expect:

```int N; string line;
cin >> N; getline(cin, line);
```

The problem here is that we want to ignore the separator between the int and the line. E.g.:

```10'\n'
a line representing some data
```

Since getline does not skip the leading ‘\n’, it will stop immediately, writing nothing into the string.

This problem is easy to solve by passing std::ws (a manipulator which discards leading whitespace from an input stream):

This succinct version is valid too:

And here is how the ranges join the party:

In another recurring pattern the input appears in this form:

```N T
a1 a2 ... aN
t1
t2
...
tT
```

N is the length of a sequence of numbers and T is the number of test cases. Generally this kind of challenge requires you to operate on the sequence with some kind of clever pre-processing and to use each test-case value (or values) to perform an operation quickly. For example: for each test-case value t, output the sum of the first t elements of the sequence.

Here is the simplest code to use for reading inputs in this scenario:

We’ll meet this problem again later in this series.

More complex input patterns can be handled by combining the examples above.

Just a few lines on output. Obviously, we use std::cout and some convenient standard abstractions like ostream_iterator.

Often the answers are just one number or two. In this case, just send them to the standard output applying the required formatting rules (e.g. space separated):

I generally don’t use endl because it causes a stream flush and this may negatively affect the performance. For this reason I use just “\n” out of habit. In certain situations a printf is probably shorter, but it’s still error prone: not only if you write a wrong format, but also if you forget to update it in case you change a type (e.g. imagine you solve a problem by using int and then you change it to unsigned long long). With streams you don’t need to change anything because they automatically pick the correct operator<< overload.

When you need to print a sequence – like a vector – just use copy + ostream_iterator:

Note that the last separator is printed after the back element. This means extra care is needed to avoid it. For example:

Maybe in C++17 we’ll use the trailblazing ostream_joiner to save the extra line, since the “last” separator is not output at all:

Another worthwhile scenario is when you need to print floating point numbers and the output is expected to be an approximation of a certain number of digits. fixed and setprecision are good friends in this case. For example:

will print num1 and num2 with exactly two digits after the decimal point:

```1.00 1.24
```

In case you need to print in a loop, it’s a good habit to set stream manipulators once, before the iteration:

I’ll be back on some standard mathematical utilities (e.g. rounding) later in this series.

#### Pushing streams to the limit

Before concluding, it’s worth sharing a couple of “beyond-the-basics” examples. Sometimes just configuring streams options is enough for solving some problems.

The first challenge required tokenizing a string based on given delimiters and printing the obtained tokens to the standard output, one per line. In other languages like Java you could use a StringTokenizer – indeed, lots of Java guys take advantage of it on CC websites. Note that more complex challenges where parsing is the main topic do not allow you to use standard facilities like streams or tokenizers (in other languages) – they won’t be as efficient as the challenge requires!

This solution is not intended for production, so please don’t post comments like “streams are slow” or “who uses streams to tokenize?”. Here we have a different scenario: this code can be easily plugged into challenges requiring some basic parsing. By default streams skip whitespace; here we need to skip other delimiters as well.

The ranges library provides views like split and delimit for this kind of thing:

Anyhow, let me go back to my modest C++14 empty sheet on HackerRank. I have only 30′ to complete the contest and the Java boys are submitting two-line solutions… should I roll my own mini-parser?

C++ has a (a bit old-fashioned?) concept called facet to handle specific localization aspects. Basically, each facet corresponds to a precise localization setting that can be tuned. For example, the ctype facet encapsulates character classification features – it can answer questions like “is this char a space?” or “is this char a digit?”. One can inherit from a base facet and change the default behavior.

For our task we can inherit from ctype<char>, a specialization of std::ctype which encapsulates character classification features for type char. Unlike general-purpose std::ctype, which uses virtual functions, this specialization uses table lookup to classify characters (which is generally faster). Here is a solution to the challenge:

This requires a bit of clarification if you don’t know facets at all: ctype<char> uses a table to associate each char to its “classification” (this table is, indeed, called the classification table). The table size is standard (ctype<char>::table_size – at least 256). We just need to set our delimiter chars to ctype_base::space. All std::istream formatted input functions are required to use std::ctype<char> for character classification during input parsing. For example, istream::operator>> asks ctype::is whether a char is a space (aka a delimiter). Under the hood, ctype<char> looks up the internal table.

Don’t worry about the lonely custom_delims allocation: the locale guarantees that the new facet will be deleted as soon as it’s no longer referenced (facets are reference counted – another possible performance penalty, in addition to virtual function calls).

Although I never use such an approach for parsing in my production code, in Competitive Programming it may be acceptable. For example, on HackerRank I submitted this code against a test case requiring to split a string of 400’000 chars and it took (output included) 0.05s – within the time requirement of that challenge. And this code is easily reusable.

I promised two examples; the other was about number punctuation: given an integer, print its string representation, comma separated, supporting different comma styles – e.g. in US style 1000000 becomes 1,000,000, whereas in Indian style it becomes 10,00,000. Once again, we may use the standard localization support:

The code is quite self-explanatory. For more details I encourage you to have a look at numpunct facet.

Hope you enjoyed this post because sometimes stream capabilities suffice in such competitions – consider them before rolling your own alternatives.

#### Summary

A brief recap of some weapons to add to our “pragmatic C++ competitive programmer toolkit”:

• Usually I/O is not the point of the exercises, so using streams is fine
• To read a sequence of values into a vector use:
• vector::reserve(N), and
• copy_n + istream_iterator + back_inserter
• To print sequences use copy + ostream_iterator
• To print floating points rounded to the nth-digit use fixed << setprecision(n)
• If parsing is not the point of the challenge, consider localization features

## C++ in Competitive Programming: intro

Posted: February 15, 2016 in Competitive Programming

In the last two years I have turned into a Competitive Programming aficionado, mostly active on HackerRank, less on TopCoder and CodingGame. These websites enable coders not only to compete with others, but also to practice by solving several categorized exercises. After you write your program into the browser, they run and test it against some prepared test cases. You can most likely use your favorite language, since they support pretty much everything, but I consider time spent with other languages precious as well.

When I started, I just wanted to challenge myself and push my brain beyond its limits. I really appreciated that facing certain kinds of problems was a good excuse to explore new areas of CS and learn something new. Even if at that time it was not my target, competitive programming is also a great way to prepare for technical job interviews.

Speaking in general, there are several reasons to get involved in Competitive Programming. I found a very nice article about that here. I quote the point that was the most important for me when I started: it’s guaranteed brain exercise: “On any given day, what you’re asked to do at work may or may not be challenging, or you may be exercising one of the many non-technical skills required of software developers. By practicing regularly with competitive programming problems, you can ensure that the coding part of your brain receives a regular workout. It’s like one of those brain training apps, but for real skills.”

Regardless of your programming language, spend some time on competitive coding.

#### Why C++ in Competitive Programming?

After you solve a challenge on websites like HackerRank, you may peek into other people’s solutions. Looking at C++ solutions, I have found a lot of “C++ people” using C++ mostly as C. Many people don’t consider (don’t know?) what the standard offers. On the other hand, I find it natural to face challenges with modern C++ by my side, coding as simply as possible. For example, I often ask myself: may I use an algorithm instead of this for loop? This attitude is worthwhile here, as it has been for years in my daily job.

I realize the word modern is overloaded: in the last years we have all probably heard the expression modern C++ everywhere, and sometimes it looks like a buzzword. I mean using the full standard toolset of C++, without external libraries, nor overcomplicated abstractions. Mainly because in competitive programming you don’t have libraries (sometimes you don’t even have the standard library) and you cannot lose time with too much generalization. Many times the same results fit real-life development.

It’s worth noting that I’m not going to explain things like “how to make a linked list”. Excellent algorithms & data structures courses/books/series have been developed for this purpose. Rather, expect to see how I used a forward list to solve a problem, where I found a multiset necessary, or when lower_bound saved my life.

I have told some friends about this series and they suspected that C++ was not the right language for talking about Competitive Programming. Actually, I had a look at the top-ranked people on websites like TopCoder and HackerRank, and C++ was always there (together with Java and Python, mainly). I found another indicative example on Google Code Jam: many of the higher-performing contestants code in C++.

I’m not surprised at all.

Certainly C and C++ are considered the lingua franca of algorithms and data structures, but I think the main reasons are the control C++ offers and its paradigm independence: it does not force a single “way of doing” on you to solve a challenge.

My friend Alessandro Vergani has another sensible idea: traditionally, C and C++ have a poor standard library compared to newer languages like Java and C#; for this reason C/C++ programmers are at ease with writing everything they need from scratch – often too much. This means they are fluent at coding things that in other languages are taken for granted.

Moreover, I think since lots of participants are students, C++ is very common as a “first language”.

My purpose is solving challenges without reinventing the wheel. This means that if I can use standard things to make all the test cases pass, I’ll do it. Obviously, I’ll justify my decisions and discuss them a bit. For example, what about this snippet for pretty-printing a non-zero unsigned 32-bit integer N into its binary representation:

It’s simple, but it’s not as efficient as it can be if written in other ways.

Does it suffice to solve the problem?

Maybe.

If so, I would be happy with this simple solution. If not, the challenge is asking me to find better ways to solve the problem.

Challenges are about trade-offs. The requirements and the constraints of the problem matter a lot: they are a rough approximation of a real problem. Very often you can – and have to – exploit the constraints of the problem to take a shortcut. This may mean optimizations, simplifications, etc.

Many times these shortcuts are not so far from real life needs.

#### Standard reasoning

How many times have you heard commands like “use standard algorithms” or “use a container”? With luck, we all know the classical reasons for doing that: efficiency, correctness, maintainability. Now there is another fact we could care about: the future of our language.

Let me articulate.

Probably the biggest limitation of the STL is its lack of composability. One of the most mind-changing additions proposed for C++17 is Eric Niebler’s ranges. Ranges are basically a shift from iterators to a superior abstraction. Ranges enable programmers to write fluent and – hopefully – efficient algorithms.

Coding with ranges produces “pipelined” transformations. For example, to generate an infinite list of integers starting at 1, consider only even values, square them, take the first 10, and sum them:

It’s not my intention to talk about ranges here, and I am sure you rule them better than me (if not, give them a try and refer to the documentation). My point is: Eric’s proposal includes standard combinators like transform and take, and mutating ones like sort and unique. They are essentially the counterparts of those in the STL. Thinking in terms of standard blocks helps embrace ranges effectively. If you are used to employing standard functions, you are already confident with several range views and algorithms.

When I can, I’ll try to show my solutions using ranges as well. Clearly ranges are not part of the standard yet, so these solutions are not intended to run on competitive programming sites. I’ll use Clang on Visual Studio 2015 for my attempts.

#### Standard flexibility

Many people don’t employ standard algorithms massively. For some of them the reason is that algorithms are often closer to math than to programming – even if programming is math…

Consider the simple task of finding the minimum difference in a sequence of ascending sorted numbers. For example, in this sequence:

```[10, 20, 30, 100, 200, 300, 1000]
```

10 is the minimum absolute difference (e.g. 20-10).

This is an explicit solution:

That solves the problem in a single pass.

The same problem written in Python:

```minDiff = min([r - l for l, r in zip(elems, elems[1:])])
```

That is completely different because it employs a functional block that C++ misses: zip. Basically, zip combines elements pair-wise from N sequences (two in this case).

Look more closely at the C++ solution.

It’s basically an accumulation, right? In each iteration we calculate the difference between two adjacent elements and then we update the global minimum. But it’s not a std::accumulate because that function cannot operate on two sequences at the same time – at least without special iterators.

Any idea?

I am sure you found it. Your standard friend is inner_product:

We’ll come back to inner_product later in this series because it’s powerful, but just to recap: the simplest overload of inner_product returns the result of accumulating a starting value with the inner products of the pairs formed by the elements of two ranges. That is:

```init + range1[0]*range2[0] + range1[1]*range2[1] + ... + range1[N-1]*range2[N-1]
```

There is more: both sum (+) and multiplication (*) can be customized. In my example, I replaced accumulation with min and product with absolute difference. I also set the initial value to the maximum possible int value – in math I would set it to infinity, the identity of the minimum operation (e.g. min(infinity, value) is value). Note also that by passing [begin, end-1] and [begin+1, end] I basically obtained these “pairs”:

```range[0], range[1]
range[1], range[2]
...
range[N-2], range[N-1]
```

inner_product has a name closer to math than to CS, but it is actually a general function that hides rich flexibility. When I say flexibility I am not talking about composability. Flexibility means that the same function can be adapted to different scenarios. Standard functions generally exhibit great flexibility.

But flexibility can lead to misuses…

Suppose after we calculate the minimum difference we want to output the pairs of elements with that difference. Recalling the first example:

```[10, 20, 30, 100, 200, 300, 1000]
```

The minimum difference is 10 and the pairs of elements differing by this value are:

```10, 20
20, 30
```

So basically we need to iterate over adjacent elements and print them if their difference is 10:

```[20-10, 30-20, 100-30, ...]
```

Is there any standard algorithm for this purpose?

std::transform comes really close, but it does not quite do what we need, mainly for two reasons: first, it does not guarantee in-order application of the custom function – nor in-order iteration – and second, the transformation is N to N, that is, each element in the source gets mapped into exactly one output element.

Ideally we need something like a zip plus a copy_if, or a copy_if plus a special iterator type wrapping two adjacent elements at a time – things that exist, for example, in Boost. For this reason, I think a for loop is the best choice in “bare” C++:

Looking at this code, some C++ hackers may force std::equal to do the same thing, getting some sugar:

The sugar consists in not accessing the vector directly and not writing the for loop explicitly.

The problem is that the snippet is unclear and hackish: std::equal has not been designed for such things. What a programmer expects from it is just a pair-wise comparison, possibly using a custom predicate. You know the saying: “if all you have is a hammer, everything looks like a nail”. Just say no. The more you practice with standard algorithms, the more they will pop up when they actually fit. Pretty much automatically.

Flexibility comes with responsibility. Standard algorithms constitute well-known patterns. They help understand the code because other people have clear expectations from them. Hacking standard algorithms can lead to disasters.

If you miss a block, at least two ways are viable: if you don’t need to reuse it, go with a local solution. If reuse is important (and generally it is), then write a function designed to be reused. This requires some care, and keeping in mind how standard C++ algorithms are designed can be a good starting point.

Since “some care” requires time, in competitive programming this is not always feasible. For this reason people tend to go with custom solutions, even for summing the elements of an array. In this series I have more time, and nothing prevents me from articulating and showing some solutions based on simple functions that C++ misses. Pragmatic competitive programmers generally keep such functions and snippets at hand. Sometimes the standard already provides them, and it’s just a matter of practice.

Although standard algorithms are really basic, they support rich flexibility, as I said. This means you can roll your own special iterator and solve the problem in a single pass – imagine also that ostream supported operator&lt;&lt; for pairs:

Is it worth it? There is no single answer to this question. It’s really contextual: it depends on where you work, on whether your codebase is complicated enough to need these abstractions, etc. Whenever you introduce a new block (e.g. adj_it) – or use one from a library – it has to be learnt by you and, hopefully, by whoever works on your codebase.

Ranges should help a lot in these scenarios because, in addition to rich flexibility, they have strong composability. Let me turn the “partial C++17 support” on and show you my range-based solution to the initial problem:

It seems completely different from the inner_product one, but what really changes is the way of thinking about the algorithm. This solution is probably closer to Python than to C++, isn’t it? That’s because more blocks become available and composition plays an important role. This brings our way of thinking about the algorithm closer to other paradigms. Again, this is an act of responsibility: choosing between – say – a for loop and a range-based solution can be disputable.

Are the inner_product and the range-based solutions hard to understand? They definitely are for programmers who don’t know standard idioms. It’s the same with map.insert, which does not insert at all if the key is already there. Is std::map misleading? I know people who spend more time dumping on standard tools than it would take them to learn those tools properly. Mastering language patterns is the key, and in the next posts I’ll show some patterns involving standard algorithms and containers.

In this series, first I’ll consider pre-C++17 solutions and whenever I can I’ll outline possible C++17 answers.

An invitation about ranges: please post comments with your suggestions/corrections/opinions on my solutions, because I think use cases are important both for Eric and for the whole C++ ecosystem.

#### What’s next

I don’t have a precise idea for the other posts yet. It’s possible I’ll discuss the usage of a certain standard algorithm in different scenarios, or how to deal with some kind of problems, or how to use a container to solve certain challenges, etc.

Definitely, since this series is about competitive programming, I’ll spend some posts on peculiar aspects of challenges, like paying attention to the constraints of the problem or choosing the proper data type for the solution.

The next post will be kind of preliminary to the others. It’s basically about input and output.

##### Final note about code snippets

You probably noticed I used Gist to render code snippets in this post. For the first time I needed to completely bypass WP: yesterday this post was broken because of the snippets, for no apparent reason. I tried to fix them, but after one hour of attempts I gave up. WP is the enemy of programming bloggers. I know, I should move to Jekyll or something better, but WP still has some advantages. For this reason I’m experimenting with this “embed a Gist” feature. The procedure is a bit longer (create a gist, paste the code, give it a sensible name, save, copy the URL into the post), but I would probably spend the same time fixing WP troubles, so it’s ok for now.