Using libjpeg in C++

Just for fun I thought it would be interesting to write a quick program to convert .jpg images to an ASCII representation. Here's a sample output:

[image: sample ASCII-art output]

I was surprised by how few examples there are online showing how to use libjpeg, and fewer still that wrap it in a more C++-friendly manner.

I decided to make the functionality reusable, so if you ever need to read in a .jpg file, modify it, and save it, this should be useful to you.
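
For a flavour of what a wrapper has to hide, this is roughly what the raw libjpeg decompression dance looks like. This is a sketch only, not the wrapper's code, and error handling is abbreviated (note in particular that libjpeg's default error handler calls exit()):

#include <cstdio>
#include <vector>

#include <jpeglib.h>

// Decompress a .jpg file into a raw byte buffer using the raw libjpeg C API.
// Sketch only: error handling abbreviated.
std::vector<unsigned char> decodeJpeg(const char* filename)
{
    FILE* infile = std::fopen(filename, "rb");
    if (!infile) return {};

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr); // default handler exits on error!

    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    const std::size_t rowStride = cinfo.output_width * cinfo.output_components;
    std::vector<unsigned char> pixels(rowStride * cinfo.output_height);

    // libjpeg hands back one scanline at a time
    while (cinfo.output_scanline < cinfo.output_height)
    {
        unsigned char* row = &pixels[cinfo.output_scanline * rowStride];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(infile);
    return pixels;
}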

One point of interest: the shrink() member function. Initially I wrote it very simply – I went through each (new) pixel in turn and calculated it as the average of a box of pixels in the original, larger bitmap. This worked fine, but even at the time I knew it was inefficient: jumping around a surrounding box means cache misses galore. The better way is to process one line at a time – that way we work on data already in the processor's cache lines, and the predictable access pattern makes correct branch prediction easier for the CPU. The version in the current source is my more considered attempt, and it ended up three times faster than the initial one (as it's on github you can compare it with the old version in the history). I've left the getAverage() member function in, in case it's useful to anyone, but shrink() no longer uses it.
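
To illustrate the idea (this is a sketch of the technique, not the repository's actual shrink() code), here's a line-at-a-time box average for a greyscale bitmap, shrinking by an integer factor. Every source pixel is visited exactly once, in memory order:

#include <cstddef>
#include <vector>

// Sketch: downscale a greyscale bitmap by an integer factor, walking the
// source one row at a time so memory access stays linear and cache-friendly,
// instead of gathering a box of pixels per output pixel.
std::vector<unsigned char> shrinkByFactor(
    const std::vector<unsigned char>& src,
    std::size_t width, std::size_t height, std::size_t factor)
{
    const std::size_t newWidth  = width / factor;
    const std::size_t newHeight = height / factor;
    std::vector<unsigned int> sums(newWidth * newHeight, 0);

    // Sequential pass: each source pixel is accumulated into the
    // destination cell it maps to.
    for (std::size_t y = 0; y < newHeight * factor; ++y)
    {
        const std::size_t destRow = (y / factor) * newWidth;
        const unsigned char* srcRow = &src[y * width];
        for (std::size_t x = 0; x < newWidth * factor; ++x)
        {
            sums[destRow + x / factor] += srcRow[x];
        }
    }

    // Divide each accumulated sum by the box area to get the average
    std::vector<unsigned char> dest(newWidth * newHeight);
    const unsigned int area = static_cast<unsigned int>(factor * factor);
    for (std::size_t i = 0; i < dest.size(); ++i)
    {
        dest[i] = static_cast<unsigned char>(sums[i] / area);
    }
    return dest;
}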

If you're on a Linux distro you'll undoubtedly have libjpeg (or one of its forks) installed already. To build my source against it, you'll also need its header files, which you can install with:

sudo apt install libjpeg-dev

You can also install it on Windows; as a test, I used Microsoft's vcpkg to get the header files.
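
If you go the vcpkg route, the port you want is (I believe – check vcpkg's own listing) the libjpeg-turbo fork, so something like this should do it:

vcpkg install libjpeg-turbo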

You can see the code at https://github.com/md81544/libjpeg_cpp – you'll want a C++11-compatible compiler for it. I've tested the latest version with both g++ and clang++.

Getting Started with Boost MultiIndex

Note: this is an introduction to / overview of Boost MultiIndex. It is not intended to be in-depth.

So. Containers. 99% of the time, std::vector, out of all the choices you have in the STL, is the right choice. The next most obvious is std::map if you need indexed access to data: for example, if you store 1,000,000 phone numbers and you want to find one quickly given a name, a std::map gives you "indexed" access – you look up your data in O(log n) time (or potentially quicker if you're using a std::unordered_map), as opposed to O(n) time for an unsorted std::vector. (Obviously these are theoretical complexities – see a previous post, which advocates actual benchmarking, as processor cache and branch prediction can have a huge effect.)
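
But what if you want fast lookup by name and by number, from the same data? That's where Boost MultiIndex comes in. As a minimal sketch of the phone-book example (the struct and data here are purely illustrative), one container can maintain several indices at once:

#include <iostream>
#include <string>

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/tag.hpp>

struct Entry
{
    std::string name;
    std::string number;
};

// Tags purely for readable index lookup
struct ByName {};
struct ByNumber {};

using PhoneBook = boost::multi_index::multi_index_container<
    Entry,
    boost::multi_index::indexed_by<
        boost::multi_index::ordered_unique<
            boost::multi_index::tag<ByName>,
            boost::multi_index::member<Entry, std::string, &Entry::name>>,
        boost::multi_index::ordered_non_unique<
            boost::multi_index::tag<ByNumber>,
            boost::multi_index::member<Entry, std::string, &Entry::number>>>>;

int main()
{
    PhoneBook book;
    book.insert({"Alice", "01234 567890"});
    book.insert({"Bob", "01234 098765"});

    // O(log n) lookup by name...
    const auto& nameIndex = book.get<ByName>();
    auto it = nameIndex.find("Alice");
    if (it != nameIndex.end())
    {
        std::cout << it->number << "\n";
    }

    // ...and by number, from the very same container
    const auto& numberIndex = book.get<ByNumber>();
    std::cout << numberIndex.count("01234 098765") << "\n";
}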


Big O – Theory vs Practice

I saw a question online recently which basically asked whether big-O notation always holds true in practice. For example, in theory, searching through a vector for an item should be O(n) whereas searching a map should be O(log n). Right?

Let's imagine we create a vector with 100,000 random integers in it. We then sort the vector and run two different searches on it. The first search starts at the beginning, compares each element against the value we're searching for, and simply keeps incrementing until it finds it. A naive search, you might say. Without resorting to std:: algorithms, how might you improve it?

A worthy idea might be a recursive binary, or bisection, search. We might, for example, need around 17 bisections to find a value in the upper quadrant of our range, compared with roughly 75,000 comparisons in the brute force approach.

It's fairly safe to say, then, that the binary search is going to be much faster, right?

Not necessarily. See the results below:

100000 random numbers generated, inserted, and sorted in a vector:
 0.018477s wall, 0.010000s user + 0.000000s system = 0.010000s CPU (54.1%)

Linear search iterations: 75086, time taken:
 0.000734s wall, 0.000000s user + 0.000000s system = 0.000000s CPU (n/a%)

Binary search iterations: 17, time taken:
 0.001327s wall, 0.000000s user + 0.000000s system = 0.000000s CPU (n/a%)

Are you surprised to see that the binary search took (very) slightly longer than the brute force approach?

Welcome to the world of modern processors. What's going on there is a healthy dose of branch prediction and plenty of processor cache hits, which together make the brute force approach entirely viable.

I guess the moral of this story is, as always, don’t optimise prematurely – because you might actually be making things worse! Always profile your hot spots and work empirically rather than on what you think you know about algorithm efficiency.

The code used here is reproduced below – it should compile on VS 2015, clang and g++ without issue (you may need -std=c++14 or later for the digit separator in MAX_VAL). You'll need Boost for the timers. You may see wildly different timings depending on optimisation levels and other factors :)

#include <iostream>
#include <vector>
#include <random>
#include <algorithm>
#include <string>

#include "boost/timer/timer.hpp"

const int MAX_VAL = 100'000;

template <typename T>
T LinearSearch(typename std::vector<T>::iterator begin,
    typename std::vector<T>::iterator end, T value)
{
    // While not ostensibly efficient, branch prediction and CPU cache for the
    // contiguous vector data should make this go like greased lightning :)
    size_t counter{0};
    while (begin < end)
    {
        if (*begin >= value)
        {
            std::cout << "\nLinear search iterations: = " << counter;
            return *begin;
        }
        ++begin;
        ++counter;
    }
    return 0;
}

template <typename T>
T BinarySearch(typename std::vector<T>::iterator begin,
    typename std::vector<T>::iterator end, T value)
{
    // The return value is not exact... just interested in getting close to the
    // value.
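    // NB: counter is static purely so it survives the recursive calls and we
    // can report the total iteration count at the end.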
    static size_t counter{0};
    size_t mid = (end - begin) / 2;
    ++counter;
    if (begin >= end)
    {
        std::cout << "\nBinary search iterations: " << counter;
        return *begin;
    }
    if (*(begin + mid) < value)
    {
        return BinarySearch(begin + mid + 1, end, value);
    }
    else if (*(begin + mid) > value)
    {
        return BinarySearch(begin, begin + mid - 1, value);
    }
    else
    {
        std::cout << "\nBinary search iterations: " << counter;
        return value;
    }
}

int main()
{
    // Fill a vector with MAX_VAL random numbers between 0 - MAX_VAL
    std::vector<unsigned int> vec;
    vec.reserve(MAX_VAL);

    {
        boost::timer::auto_cpu_timer tm;
        std::random_device randomDevice;
        std::default_random_engine randomEngine(randomDevice());
        std::uniform_int_distribution<unsigned int> uniform_dist(0, MAX_VAL);
        for (int n = 0; n < MAX_VAL; ++n)
        {
            vec.emplace_back(uniform_dist(randomEngine));
        }
        // Sort the vector
        std::sort(vec.begin(), vec.end());
        std::cout << MAX_VAL
                  << " random numbers generated, inserted, and sorted"
                     " in a vector:" << std::endl;
    }

    {
        boost::timer::auto_cpu_timer tm2;
        LinearSearch(
            vec.begin(), vec.end(), static_cast<unsigned int>(MAX_VAL * 0.75));
        std::cout << ", time taken:" << std::endl;
    }
    {
        boost::timer::auto_cpu_timer tm2;
        BinarySearch(vec.begin(), vec.end() - 1,
            static_cast<unsigned int>(MAX_VAL * 0.75));
        std::cout << ", time taken:" << std::endl;
    }

    return 0;
}

C++ Custom Deleters: unique_ptr vs shared_ptr

C++11 gives us two indispensable smart pointers, std::unique_ptr and std::shared_ptr. So much has been written about these that there's no point in me re-hashing anything, other than to reiterate that if you are using "naked" owning pointers in your code these days, you are simply doing it wrong.

I wanted to mention briefly the use of custom deleters with smart pointers. If you haven't looked at this aspect of smart pointers before, custom deleters basically give us a way to specify what should happen when the smart pointer goes out of scope.
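
As a taster (a minimal sketch, not covering everything the full post goes into), note the key difference between the two: with std::unique_ptr the deleter is part of the pointer's type, whereas std::shared_ptr type-erases it:

#include <cstdio>
#include <memory>

int main()
{
    // With unique_ptr the deleter is part of the pointer's TYPE...
    auto fileCloser = [](std::FILE* f) { if (f) std::fclose(f); };
    std::unique_ptr<std::FILE, decltype(fileCloser)> uf(
        std::fopen("test.txt", "w"), fileCloser);

    // ...whereas shared_ptr type-erases it: the deleter is just a
    // constructor argument and doesn't appear in the type at all.
    std::shared_ptr<std::FILE> sf(
        std::fopen("test2.txt", "w"),
        [](std::FILE* f) { if (f) std::fclose(f); });

    // Both files are closed automatically when the pointers go out of scope.
}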