Posts Tagged ‘hackerrank’

One of the first challenges in HackerRank’s “Warmup” section is probably the “Hello World” of learning arrays in any language: calculating the sum of a sequence of elements. Although this exercise is trivial, I’ll work through it to break the ice and show you a few concepts that lay the groundwork for more complicated challenges.

I’m assuming you are already familiar with concepts like iterator, container and algorithm. Most of the time I’ll give hints for using these C++ tools effectively in Competitive Programming.

Here’s the problem specification: you are given an array of integers of size N. Can you find the sum of the elements in the array? It’s guaranteed the sum won’t overflow the int32 representation.

First of all, we need an “array of size N”, where N is given at runtime. The C++ STL (Standard Template Library) provides many useful and cleverly designed data structures (containers) we don’t need to reinvent. Sometimes more complicated challenges require us to write them from scratch, and advanced exercises involve less common data structures that cannot be light-heartedly included in the STL. We’ll deal with some examples in the future.

That’s not the case here. The primary lesson of this post is: don’t reinvent the wheel. Many times standard containers fit your needs, especially the simplest one: std::vector, basically a dynamic sequence of contiguous elements:
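Something along these lines (a minimal sketch: read N, then the N values):

    #include <iostream>
    #include <vector>

    int main()
    {
        int n; std::cin >> n;           // N, the length of the sequence
        std::vector<int> v;
        v.reserve(n);                   // get ready to accommodate n elements
        for (int i = 0; i < n; ++i)
        {
            int value; std::cin >> value;
            v.push_back(value);         // append at the end
        }
        // ... compute the sum
    }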

For the purpose of this basic post, here is a list of important things to remember about std::vector:

  • it’s guaranteed to store elements contiguously, so our cache will love it;
  • elements can be accessed through iterators, using offsets on regular pointers to elements, using the subscript operator (e.g. v[index]) and with convenient member functions (e.g. at, front, back);
  • it manages its size automatically: it can grow as needed. The real capacity of the vector is usually different from its length (size, in STL parlance);
  • that capacity can be enlarged explicitly with the reserve member function, which is the standard way to gently tell the vector: “get ready to accommodate N elements”;
  • adding a new element at the end of the vector (push_back/emplace_back) does not cause relocation as long as the internal storage can accommodate the extra element (that is: vector.size() + 1 <= vector.capacity());
  • on the other hand, adding (not overwriting) an entry at any other position requires moving the elements that follow (possibly within the same block of memory, if the capacity allows it), since contiguity has to be guaranteed;
  • the previous point means that inserting an element at the end is generally faster than inserting it at any other position (for this reason std::vector provides push_back, emplace_back and pop_back member functions);
  • knowing in advance the number of elements to store is information that can be exploited by applying the synergic duo reserve + push_back (or emplace_back).

The last point leads to an important pattern: inserting at the end is O(1) as long as the vector capacity can accommodate the extra element – vector.size() + 1 <= vector.capacity(). You may ask: why not enlarge the vector first and then just assign the values? We can do that by calling resize:
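A sketch of that variant:

    #include <iostream>
    #include <vector>

    int main()
    {
        int n; std::cin >> n;
        std::vector<int> v;
        v.resize(n);                    // n value-initialized (zeroed) ints
        for (int i = 0; i < n; ++i)
            std::cin >> v[i];           // just assign, no push_back
        // ... compute the sum
    }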

resize enlarges the vector up to N elements, initializing the new elements to some given value, or to the default one – as in this case. This additional work does not matter in this challenge, however initialization may – in general – cause some overhead (read, for example, these thoughts by Thomas Young). As a reader pointed out on reddit, push_back hides a branching logic that can cause some cost. For this reason he suggests that two sequential passes over the data (which is contiguous) may be faster. I think this can be true especially for small data; the classical recommendation, however, is to profile your code when such questions arise. In my opinion, getting into the habit of using reserve + *_back is better and potentially faster in the general case.

The heart of the matter is: need a dynamic array? Consider std::vector. In competitive programming std::vector is 99% of the time the best replacement for a dynamic C-like array (e.g. T* or T**). The remaining 1% consists of more advanced challenges requiring us to design different kinds of dynamic arrays that break some of std::vector’s guarantees to gain some domain-specific performance. Replacing std::vector with custom optimized containers is more common in real-life code (to get an idea, have a look for example here, here and here).

If N were given at compile time, a static array could be better (as long as N is small – say less than one million – otherwise we’d get a stack overflow). For this purpose, std::array is our friend – basically a richer replacement for T[]. “Richer replacement” means that std::array is the STL adaptation of a C array: it provides the member functions we generally find in STL containers, like .size(), .at(), .begin()/.end(). std::array combines the performance and accessibility of a C-style array with the benefits of a standard container. Just use it.
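For instance, a tiny sketch of its container-like interface:

    #include <array>
    #include <iostream>

    int main()
    {
        std::array<int, 4> a{{1, 2, 3, 4}};  // size fixed at compile time
        std::cout << a.size() << ' '         // 4
                  << a.front() << ' '        // 1
                  << a.back() << ' '         // 4
                  << a.at(2) << '\n';        // 3, with bounds checking
    }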

Since problem statements give away a lot of information about sizes and bounds, we’ll see that static-sized arrays are extraordinarily useful in competitive programming. In the future I’ll spend some time on this topic.

Now, let’s look at my snippet again: can we do better? Of course we can (from my previous post):
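Something along these lines, combining istream_iterator, copy_n and back_inserter (a sketch in the spirit of that post):

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int main()
    {
        int n; std::cin >> n;
        std::vector<int> v;
        v.reserve(n);
        std::copy_n(std::istream_iterator<int>(std::cin), n, std::back_inserter(v));
        // ... compute the sum
    }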

At this point we have the vector filled and we need to compute the sum of the elements. A hand-made for loop could do that:
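For example:

    // inside main, after filling v
    int sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];
    std::cout << sum;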

Can we do better?

Sure, by using the first numeric algorithm of this series: ladies and gentlemen, please welcome std::accumulate:

One of the most important loops in programming is one that adds a range of things together. This abstraction is known as reduction or fold. In C++, reduction is mimicked by std::accumulate. Basically, it accumulates elements from left to right by applying a binary operation:
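It lives in the <numeric> header; in its simplest form it looks like this:

    #include <numeric>
    // ...
    // inside main, after filling v
    std::cout << std::accumulate(std::begin(v), std::end(v), 0);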

accumulate with three parameters uses operator+ as binary operation.

std::accumulate guarantees:

  • the order of evaluation is left to right (known also as left fold), and
  • the time complexity is linear in the length of the range, and
  • if the range is empty, the initial value is returned (that’s why we have to provide one).

The reduction function appears in this idiomatic form:
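Schematically (ResultType and ElementType are just placeholder names):

    // the binary operation passed to accumulate has this shape:
    ResultType f(ResultType accumulatedSoFar, ElementType element);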

So the result type may be different from the underlying type of the range (ElementType). For example, given a vector of const char*, here is a simple way to calculate the length of the longest string by using std::accumulate (credits to Davide Di Gennaro for having suggested this example):
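A possible implementation (the strings are just sample data):

    #include <algorithm>
    #include <cstring>
    #include <numeric>
    #include <vector>

    std::vector<const char*> names{"Marco", "Davide", "Eric"};
    auto longest = std::accumulate(std::begin(names), std::end(names), std::size_t{0},
        [](std::size_t maxSoFar, const char* s) {
            return std::max(maxSoFar, std::strlen(s));
        });
    // longest == 6 ("Davide")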

To accumulate from the right (known as right fold) we just use reverse iterators:
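For instance, folding with std::minus (which lives in <functional>) through reverse iterators:

    // same reduction, but traversing the sequence from the back
    auto foldedFromRight = std::accumulate(v.rbegin(), v.rend(), 0, std::minus<int>());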

Right fold makes some difference – for example – when a non-associative function (e.g. subtraction) is used.

In functional programming fold is very generic and can be used to implement other operations. In this great article, Paul Keir describes how to get the equivalent results in C++ by accommodating std::accumulate.

Does std::accumulate have any pitfalls? There exist cases where a+=b is better than a = a + b (the latter is what std::accumulate does in the for loop). Although hacks are doable, I think if you fall into such scenarios, a for loop would be the simplest and the most effective option.

Here is another example of using std::accumulate to multiply the elements of a sequence:
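Note the initial value: 1, the identity of multiplication:

    #include <functional>
    #include <numeric>
    // ...
    auto product = std::accumulate(std::begin(v), std::end(v), 1, std::multiplies<>());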

std::multiplies<> is a standard function object (find others here).

Using standard function objects makes the usage of algorithms more succinct. For example, consider the problem of finding the missing number from an array of integers: given an array of N integers called “baseline” and another array of N-1 integers called “actual”, find the number that exists in “baseline” but not in “actual”. Duplicates may exist. (This problem is a generalization of the “find the missing number” problem, where the first array is actually the range from 0 to N and a clever solution is to apply the famous Gauss formula N(N+1)/2 and subtract from it the sum of the elements of “actual”.) An example:

The missing number is 2.

A simple linear solution is calculating the sum of both sequences and then subtracting one result from the other: the difference is the missing number. This solution may easily result in integer overflow, which is undefined behavior in C++ for signed integers. A wiser solution consists in xor-ing the elements of both arrays and then xor-ing the two results.

Xor is a bitwise operation – it does not “need” new bits – so it never overflows. To see why this solution works, recall how xor behaves:
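In particular (shown here as compile-time checks on sample values):

    static_assert((5u ^ 5u) == 0u, "a ^ a == 0");
    static_assert((5u ^ 0u) == 5u, "a ^ 0 == a");
    static_assert((3u ^ 7u) == (7u ^ 3u), "xor is commutative");
    static_assert(((3u ^ 7u) ^ 9u) == (3u ^ (7u ^ 9u)), "xor is associative");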

Suppose “a” is the result of xor-ing all the elements but the missing one – basically, it’s the result of xor-ing “actual”. Call the missing number “b”. Xor-ing “a” with the missing element “b” gives the same value as xor-ing together all the elements of “baseline”; call that total “c”. We now have all the information to find the missing value, since “a” ^ “c” is exactly “b”, the missing number. Here is the corresponding succinct C++ code:
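A sketch, again leaning on accumulate and a standard function object (std::bit_xor):

    #include <functional>
    #include <numeric>
    #include <vector>

    int missingNumber(const std::vector<int>& baseline, const std::vector<int>& actual)
    {
        auto xorAll = [](const std::vector<int>& v) {
            return std::accumulate(v.begin(), v.end(), 0, std::bit_xor<int>());
        };
        return xorAll(baseline) ^ xorAll(actual); // "c" ^ "a" is exactly "b"
    }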

Let’s go back to the initial challenge. We can do even better.

To realize how, it’s important to get into the habit of thinking in terms of iterators rather than containers. Since standard algorithms work on ranges (pairs of iterators), we don’t need to store input elements into the vector at all:
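A sketch, reading straight from the stream and skipping the leading N:

    #include <iostream>
    #include <iterator>
    #include <numeric>

    int main()
    {
        std::cout << std::accumulate(
            std::next(std::istream_iterator<int>(std::cin)), // skip N, the length
            std::istream_iterator<int>(),                    // end-of-stream iterator
            0);
    }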

Advancing by one – using next – is a legitimate move, since the problem rigorously describes what the input looks like. This snippet solves the challenge in a single line, in O(n) time and O(1) space. That’s pretty good. It’s also our first optimization (actually not required), since our solution dropped to O(1) space – with std::vector it was O(n).

That’s an example of what I named “standard reasoning” in the introduction of this series. Thinking in terms of standard things like iterators – objects that decouple algorithms from containers – is convenient and it should become a habit. Although it may seem counterintuitive, for us C++ coders thinking in terms of iterators is not possible without knowing containers. For example, we’ll never use std::find on a std::map; instead we’ll call the member function std::map::find, and the reason is that we know how std::map works. In a future post I’ll show you other examples on this delicate topic.

Our solution leads to ranges naturally:
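A sketch with range-v3 (the names ranges::istream, view::tail and ranges::accumulate come from that library, not from standard C++):

    #include <range/v3/all.hpp>
    #include <iostream>

    int main()
    {
        std::cout << ranges::accumulate(
            ranges::istream<int>(std::cin) | ranges::view::tail, 0);
    }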

view::tail takes all the elements starting from the second (again, I skipped the input length), and ranges::istream is a convenient function which generates a range from an input stream (istream_range). If we had needed to skip more elements at the beginning, we would have used view::drop, which removes the first N elements from the front of a source range.

The iterator-based and range-based versions look very similar, however – as I said in the introduction of this series – iterators are not easy to compose whereas ranges are composable by design. In the future we’ll see examples of solutions that look extremely different because of this fact.

In Competitive Programming these single-pass algorithms are really useful. The STL provides several single-pass algorithms for accomplishing tasks like finding an element in a range, counting occurrences, verifying some conditions, etc. We’ll see other applications in this series.

In the next post we’ll meet another essential container – std::string – and we’ll see other algorithms.

Recap for the pragmatic C++ competitive coder:

  • Don’t reinvent containers whenever standard ones fit your needs:
    • Dynamic array? (e.g. int*) Consider std::vector
    • Static array? (e.g. int[]) Use std::array
  • Prefer standard algorithms to hand-made for loops:
    • often more efficient, more correct and consistent,
    • more maintainable (a “language in the language”)
    • use standard function objects when possible
    • use std::accumulate to combine a range of things together
    • If customizing an algorithm results in complicated code, write a loop instead
  • Think in terms of standard things:
    • iterators separate algorithms from containers
    • understand containers’ member functions

In the last two years I have turned into a Competitive Programming aficionado, mostly active on HackerRank, less on TopCoder and CodinGame. These websites enable coders not only to compete with others, but also to practice by solving several categorized exercises. After you write your program in the browser, they run and test it against some prepared test cases. You can almost certainly use your favorite language, because they support pretty much everything, but I find it really valuable to spend time with other languages as well.

When I started, I just wanted to challenge myself and push my brain beyond its limits. I really appreciated that facing certain kinds of problems was a good excuse to explore new areas of CS and learn something new. Even though at that time it was not my goal, competitive programming is also a great way to prepare for technical job interviews.

Speaking in general, there are several reasons to get involved in Competitive Programming. I found a very nice article about that here. I quote the point that was the most important for me when I started: it’s guaranteed brain exercise: “On any given day, what you’re asked to do at work may or may not be challenging, or you may be exercising one of the many non-technical skills required of software developers. By practicing regularly with competitive programming problems, you can ensure that the coding part of your brain receives a regular workout. It’s like one of those brain training apps, but for real skills.”

Regardless of your programming language, spend some time on competitive coding.

Why C++ in Competitive Programming?

 

When you solve a challenge on websites like HackerRank, you may peek at other people’s solutions. Looking at C++ solutions, I have found a lot of “C++ people” using C++ mostly as C. Many people don’t consider (don’t know?) what the standard offers. On the other hand, I find it natural to face challenges with modern C++ by my side, coding as simply as possible. For example, I often ask myself: can I use an algorithm instead of this for loop? This attitude is worthwhile here, as it has been for years in my daily job.

I realize the word modern is overloaded: in the last few years we have all probably heard the expression modern C++ everywhere, and sometimes it looks like a buzzword. I mean using the full standard toolset of C++, without external libraries and without overcomplicated abstractions. Mainly because in competitive programming you don’t have libraries (sometimes you don’t have the standard library either) and you cannot waste time on too much generalization. Many times the same approach fits real-life development.

It’s worth noting that I’m not going to explain things like “how to make a linked list”. Excellent algorithms & data structures courses/books/series have been developed for that purpose. Rather, expect to see how I used a forward list to solve a problem, where I found a multiset necessary, or when lower_bound saved my life.

I have told some friends about this series and they suspected that C++ was not the right language for talking about Competitive Programming. Actually, I had a look at the top ranked people on websites like TopCoder and HackerRank and C++ was always there (together with Java and Python, mainly). I found another indicative example on Google Code Jam: most of the higher-performing contestants code in C++.

I’m not surprised at all.

Certainly C and C++ are considered the lingua franca of algorithms and data structures, but I think the main reason is the control C++ offers and its paradigm independence: it does not force a single “way of doing” on you to solve a challenge.

My friend Alessandro Vergani has another sensible idea: traditionally C and C++ have a poor standard library compared to newer languages like Java and C#, and for this reason C/C++ programmers are at ease with writing everything they need from scratch – and often too much of it. This means they are fluent at coding things that in other languages are taken for granted.

Moreover, I think since lots of participants are students, C++ is very common as a “first language”.

 

Challenges are about trade-off

 

My purpose is solving challenges without reinventing the wheel. This means if I can use standard things to make all the test cases pass, I’ll do it. Obviously, I’ll justify my decisions and I’ll discuss a bit. For example, what about this snippet for pretty printing a non-zero unsigned 32-bit integer N into its binary representation:
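For example, something along these lines, using std::bitset and a string (simple, but it allocates and scans more than strictly needed):

    #include <bitset>
    #include <cstdint>
    #include <iostream>
    #include <string>

    void printBinary(std::uint32_t n) // n != 0
    {
        const auto bits = std::bitset<32>(n).to_string();
        std::cout << bits.substr(bits.find('1')) << '\n'; // drop the leading zeros
    }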

It’s simple, but not as efficient as it could be if written in other ways.

Does it suffice to solve the problem?

Maybe.

If so, I would be happy with this simple solution. If not, the challenge is asking me to find better ways to solve the problem.

Challenges are about trade-off. The requirements and the constraints of the problem matter a lot; they are a rough approximation of a real problem. Often you can exploit the constraints of the problem and take a shortcut. This may mean an optimization, a simplification, etc. Very often you have to exploit the constraints to solve the problem at all.

Many times these shortcuts are not so far from real life needs.

 

Standard reasoning

 

How many times have you heard advice like “use standard algorithms” or “use a container”? With luck, we all know the classical reasons for doing that: efficiency, correctness, maintainability. Now there is another fact we could care about: the future of our language.

Let me elaborate.

Probably the biggest limitation of the STL is its lack of composability. One of the most mind-changing additions to C++17 will be Eric Niebler’s ranges. Ranges are basically a shift from iterators to a superior abstraction. Ranges enable programmers to write fluent and – hopefully – efficient algorithms.

Coding with ranges produces “pipelined” transformations. For example, to generate an infinite list of integers starting at 1, consider only even values, square them, take the first 10, and sum them:
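With range-v3 (view::ints, view::filter, view::transform, view::take and accumulate are that library’s names), it may look like this:

    #include <range/v3/all.hpp>
    #include <iostream>

    int main()
    {
        auto sum = ranges::accumulate(
            ranges::view::ints(1)
          | ranges::view::filter([](int i) { return i % 2 == 0; })
          | ranges::view::transform([](int i) { return i * i; })
          | ranges::view::take(10), 0);
        std::cout << sum; // 1540
    }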

It’s not my intention to talk about ranges here, and I am sure you handle them better than I do (if not, give them a try and refer to the documentation). My point is: Eric’s proposal includes standard combinators like transform and take, and mutating ones like sort and unique. They are essentially the counterparts of those in the STL. Thinking in terms of standard blocks helps embrace ranges effectively. If you are used to employing standard functions, you are already comfortable with several range views and algorithms.

When I can, I’ll try to show my solutions by using ranges as well. Clearly ranges are not part of the standard yet, so these solutions are not intended to run on competitive programming sites. I’ll use Clang on Visual Studio 2015 for my attempts.

 

Standard flexibility

 

Many people don’t employ standard algorithms massively. For some of them the reason is that algorithms are often closer to math than to programming – even though programming is math…

Consider the simple task of finding the minimum difference in a sequence of ascending sorted numbers. For example, in this sequence:

[10, 20, 30, 100, 200, 300, 1000]

10 is the minimum absolute difference (e.g. 20-10).

This is an explicit solution:
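A sketch (elems holds the sorted input):

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <limits>
    #include <vector>

    int main()
    {
        std::vector<int> elems{10, 20, 30, 100, 200, 300, 1000};
        int minDiff = std::numeric_limits<int>::max();
        for (std::size_t i = 0; i + 1 < elems.size(); ++i)
            minDiff = std::min(minDiff, elems[i + 1] - elems[i]);
        std::cout << minDiff; // 10
    }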

That solves the problem in a single pass.

The same problem written in Python:

minDiff = min([r - l for l, r in zip(elems, elems[1:])])

That is completely different because it employs a functional block that C++ misses: zip. Basically, it combines elements pair-wise from N sequences (two in this case).

Look more closely at the C++ solution.

It’s basically an accumulation, right? In each iteration we calculate the difference between two adjacent elements and then we update the global minimum. But it’s not a std::accumulate because that function cannot operate on two sequences at the same time – at least without special iterators.

Any idea?

I am sure you found it. Your standard friend is inner_product:
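A sketch of the idea (same elems as before):

    #include <algorithm>
    #include <cstdlib>
    #include <iostream>
    #include <limits>
    #include <numeric>
    #include <vector>

    int main()
    {
        std::vector<int> elems{10, 20, 30, 100, 200, 300, 1000};
        auto minDiff = std::inner_product(
            elems.begin(), elems.end() - 1,              // first range: [begin, end-1)
            elems.begin() + 1,                           // second range starts at begin+1
            std::numeric_limits<int>::max(),             // "infinity", the identity of min
            [](int a, int b) { return std::min(a, b); }, // replaces the sum (+)
            [](int l, int r) { return std::abs(r - l); } // replaces the product (*)
        );
        std::cout << minDiff; // 10
    }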

We’ll come back to inner_product later in this series because it’s powerful, but just to recap: the simplest overload of inner_product returns the result of accumulating a starting value with the inner products of the pairs formed by the elements of two ranges. That is:

init += range1[0]*range2[0] + range1[1]*range2[1] + ... + range1[N-1]*range2[N-1]

There is more: both sum (+) and multiplication (*) can be customized. In my example, I replaced accumulation with min and product with absolute difference. I also set the initial value to the maximum possible int value – in math I would set it to infinity, the identity of the minimum operation (e.g. min(infinity, value) is value). Note also that by passing [begin, end-1] and [begin+1, end] I basically obtained these “pairs”:

range[0], range[1]
range[1], range[2]
...
range[N-2], range[N-1]

inner_product has a name closer to math than to CS, but it is actually a general function that hides rich flexibility. When I say flexibility I am not saying composability. Flexibility means that the same function can be adapted to different scenarios. Standard functions generally have great flexibility.

But flexibility can lead to misuses…

Suppose after we calculate the minimum difference we want to output the pairs of elements with that difference. Recalling the first example:

[10, 20, 30, 100, 200, 300, 1000]

The minimum difference is 10 and the pairs of elements differing by this value are:

10, 20
20, 30

So basically we need to iterate over adjacent elements and print them if their difference is 10:

[20-10, 30-20, 100-30, ...]

Is there any standard algorithm for this purpose?

std::transform is really close, but it does not really perform what we need, mainly for two reasons: first of all, it does not guarantee in-order application of the custom function – or in-order iteration – and second, the transformation is N to N, that is, each element in the source gets mapped into exactly one element in the destination.

Ideally we need something like a zip and a copy_if, or a copy_if and a special iterator type wrapping two adjacent elements at the same time – things that exist, for example, in Boost. For this reason, I think a for loop is the best choice in “bare” C++:
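Something like this (elems and minDiff as computed before):

    for (std::size_t i = 0; i + 1 < elems.size(); ++i)
        if (elems[i + 1] - elems[i] == minDiff)
            std::cout << elems[i] << ", " << elems[i + 1] << '\n';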

Looking at this code, some C++ hackers may force std::equal to do the same thing, getting some sugar:
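For instance (a sketch of the kind of trick I mean, not a recommendation):

    // hackish: (ab)using std::equal's predicate only for its side effect
    std::equal(elems.begin() + 1, elems.end(), elems.begin(),
        [minDiff](int r, int l) {
            if (r - l == minDiff)
                std::cout << l << ", " << r << '\n';
            return true; // always "equal": we only care about the printing
        });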

The sugar consists in not indexing into the vector and not writing the for loop explicitly.

The problem is that the snippet is unclear and hackish: std::equal has not been designed to do such things. What a programmer expects from it is just a pair-wise comparison, possibly using a custom predicate. You know, don’t fall into the common trap of “if all you have is a hammer, everything looks like a nail”. Just say no. The more you practice with standard algorithms, the more they will pop up when they actually fit. Pretty much automatically.

Flexibility comes with responsibility. Standard algorithms constitute well-known patterns. They help understand the code because other people have clear expectations from them. Hacking standard algorithms can lead to disasters.

If you are missing a building block, at least two ways are viable: if you don’t need to reuse it, go with a local solution; if reuse is important (and generally it is), then write a function designed to be reused. This requires some care, and keeping in mind how standard C++ algorithms are designed can be a good starting point.

Since “some care” requires time, in competitive programming this is not always feasible. For this reason people tend to go with custom solutions, even for summing the elements of an array. In this series I have more time, and nothing prevents me from elaborating and showing some solutions based on simple functions that C++ misses. Pragmatic competitive programmers generally keep such functions and snippets on hand. Sometimes the standard already provides them, and it’s just a matter of practice.

Although standard algorithms are really basic, they support a rich flexibility, as I said. This means you can roll your own special iterator and solve the problem in a single pass – imagine also ostream supports operator<< for pairs:
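Here is one possible sketch of that idea. To keep it self-contained I print through for_each and a lambda rather than through copy_if and an ostream_iterator for pairs; the name adj_it comes from the discussion below, everything else is assumed:

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <utility>
    #include <vector>

    // a minimal input iterator yielding pairs of adjacent elements
    template <typename It>
    class adj_it
    {
    public:
        using underlying = typename std::iterator_traits<It>::value_type;
        using value_type = std::pair<underlying, underlying>;
        using reference = value_type;
        using pointer = void;
        using difference_type = typename std::iterator_traits<It>::difference_type;
        using iterator_category = std::input_iterator_tag;

        explicit adj_it(It it) : it_(it) {}
        value_type operator*() const { return { *it_, *std::next(it_) }; }
        adj_it& operator++() { ++it_; return *this; }
        adj_it operator++(int) { auto tmp = *this; ++it_; return tmp; }
        friend bool operator==(const adj_it& a, const adj_it& b) { return a.it_ == b.it_; }
        friend bool operator!=(const adj_it& a, const adj_it& b) { return !(a == b); }
    private:
        It it_;
    };

    template <typename It>
    adj_it<It> make_adj(It it) { return adj_it<It>(it); }

    int main()
    {
        std::vector<int> elems{10, 20, 30, 100, 200, 300, 1000};
        const int minDiff = 10;
        // single pass over adjacent pairs, printing the ones at minimum distance
        std::for_each(make_adj(elems.begin()), make_adj(std::prev(elems.end())),
                      [minDiff](const std::pair<int, int>& p) {
                          if (p.second - p.first == minDiff)
                              std::cout << p.first << ", " << p.second << '\n';
                      });
    }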

Is it worth it? There is no single answer to this question. It’s really contextual: it depends on where you work, on whether your codebase needs these abstractions, etc. Whenever you introduce a new block (or use one from a library) – e.g. adj_it – it has to be learnt by you and – hopefully – by whoever works on your codebase.

Ranges should help a lot in these scenarios because in addition to a rich flexibility they have a strong composability. Let me turn the “partial C++17 support” on and show you my range-based solution to the initial problem:
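A sketch, assuming range-v3’s view::zip_with, view::tail and min:

    #include <range/v3/all.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> elems{10, 20, 30, 100, 200, 300, 1000};
        // pair each element with its successor by zipping the range with its own tail
        auto dists = ranges::view::zip_with([](int l, int r) { return r - l; },
                                            elems, elems | ranges::view::tail);
        std::cout << ranges::min(dists); // 10
    }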

It seems completely different from the inner_product one, but what really changes is the way of thinking about the algorithm. This solution is probably closer to Python than to C++, isn’t it? That’s because more blocks become available and composition plays an important role. This brings our way of thinking about the algorithm closer to other paradigms. Again, this is an act of responsibility: choosing between – say – a for loop and a range-based solution can be debatable.

Are the inner_product and the range-based solutions hard to understand? They definitely are for programmers who don’t know standard idioms. It’s the same for map.insert, which does not insert at all if the key is already there. Is std::map misleading? I know people who spend more time complaining about standard things than it would take them to learn those things properly. Mastering language patterns is the key, and in the next posts I’ll show some patterns involving standard algorithms and containers.

In this series, first I’ll consider pre-C++17 solutions and whenever I can I’ll outline possible C++17 answers.

An invitation about ranges: please, post comments with your suggestions/corrections/opinions on my solutions because I think use cases are important both for Eric and for the whole C++ ecosystem.

 

What’s next

 

I don’t have a precise idea for the other posts yet. It’s possible I’ll discuss the usage of a certain standard algorithm in different scenarios, or how to deal with some kind of problems, or how to use a container to solve certain challenges, etc.

Definitely, since this series is about competitive programming, I’ll spend some posts on some peculiar aspects of challenges like paying attention to the constraints of the problem, or choosing the proper data type for the solution.

The next post will be kind of preliminary to the others. It’s basically about input and output.

Final note about code snippets

You probably noticed I used gist to render code snippets in this post. For the first time I needed to completely bypass WP: yesterday this post was broken because of snippets, for no apparent reason. I tried to fix them but after one hour of attempts I gave up. WP is the enemy of programming bloggers. I know, I should move to Jekyll or something better, but WP still has some advantages. For this reason I’m experimenting with this “embed a Gist” feature. The procedure is a bit longer (create the gist, paste the code, give it a sensible name, save, copy the URL into the post) but I think I would spend the same time fixing WP troubles, so it’s ok for now.