Posts by Marius Bancila

Software architect, author, speaker, Microsoft MVP Developer Technologies

C++11 concurrency: locks revisited

In a previous post about locks in C++11 I showed a dummy implementation of a container class that looked like this (simplified):

template <typename T>
class container 
{
   std::recursive_mutex _lock;
   std::vector<T> _elements;
public:
   void dump()
   {
      std::lock_guard<std::recursive_mutex> locker(_lock);
      for(auto e : _elements)
         std::cout << e << std::endl;
   }
};

One can argue that the dump() method does not alter the state of the container and should be (logically) const. However, as soon as you make it const you get the following error:

'std::lock_guard::lock_guard(_Mutex &)' 
   : cannot convert parameter 1 from 'const std::recursive_mutex'
     to 'std::recursive_mutex &'

The mutex (regardless of which of the four flavors available in C++11 you use) must be acquired and released, and the lock() and unlock() operations are not const. So the mutex passed to the lock_guard cannot be const, as it would be if the method were const.

The solution to this problem is to make the mutex mutable. mutable allows state to be changed from const functions. It should, however, be used only for hidden or “meta” state (imagine caching computed or looked-up data so that a subsequent call can complete immediately, or bits like a mutex that only complement the actual state of an object).
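As an illustration of that “meta” state idea, here is a hypothetical lookup class (the dictionary name and its fake expensive lookup are invented for this sketch) whose cache and cache mutex are mutable, so the logically const find() can still populate the cache:

```cpp
#include <map>
#include <mutex>
#include <string>

// Hypothetical sketch: the cache and its mutex are "meta" state,
// so they are mutable and find() can remain logically const.
class dictionary
{
    mutable std::mutex _cachelock;
    mutable std::map<int, std::string> _cache;

    // stands in for an expensive computation or lookup
    static std::string expensive_lookup(int key)
    {
        return "value-" + std::to_string(key);
    }
public:
    std::string find(int key) const
    {
        std::lock_guard<std::mutex> locker(_cachelock);
        auto it = _cache.find(key);
        if (it != _cache.end())
            return it->second;       // cached: subsequent calls complete immediately
        std::string value = expensive_lookup(key);
        _cache[key] = value;         // allowed in a const method: _cache is mutable
        return value;
    }
};
```

From the caller's point of view the object never changes; only the hidden cache does.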

template <typename T>
class container 
{
   mutable std::recursive_mutex _lock;
   std::vector<T> _elements;
public:
   void dump() const
   {
      std::lock_guard<std::recursive_mutex> locker(_lock);
      for(auto e : _elements)
         std::cout << e << std::endl;
   }
};

An important thing to note is that in C++11 both const and mutable imply thread-safety. I recommend this C++ and Beyond talk by Herb Sutter called You don’t know [blank] and [blank].

C++11 concurrency: condition variables

In the previous post in this series we looked at the C++11 support for locks, and in this post we continue on this topic with condition variables. A condition variable is a synchronization primitive that enables blocking of one or more threads until either a notification is received from another thread, a timeout expires, or a spurious wake-up occurs.

There are two implementations of a condition variable that are provided by C++11:

  • condition_variable: requires any thread that wants to wait on it to acquire a std::unique_lock first.
  • condition_variable_any: is a more general implementation that works with any type that satisfies the condition of a basic lock (basically has a lock() and unlock() method). This might be more expensive to use (in terms of performance and operating system resources), therefore it should be preferred only if the additional flexibility it provides is necessary.

So how does a condition variable work?

  • There must be at least one thread that is waiting for a condition to become true. The waiting thread must first acquire a unique_lock. This lock is passed to the wait() method, which releases the mutex and suspends the thread until the condition variable is signaled. When that happens, the thread is awakened and the lock is re-acquired.
  • There must be at least one thread that is signaling that the condition has become true. The signaling can be done with notify_one(), which unblocks one (arbitrary) thread waiting for the condition to be signaled, or with notify_all(), which unblocks all the threads waiting for the condition.
  • Because of some complications in making the condition wake-up completely predictable on multiprocessor systems, spurious wake-ups can occur: a thread is awakened even though nobody signaled the condition variable. Therefore it is necessary to check whether the condition is still true after the thread has awakened. And since spurious wake-ups can occur multiple times, that check must be done in a loop.
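These three points can be sketched with a minimal pair of functions (the produce/consume names and the single int payload are invented for illustration):

```cpp
#include <condition_variable>
#include <mutex>

std::mutex              m;
std::condition_variable cv;
bool                    ready = false;   // the condition being waited on
int                     data  = 0;

// waiting side: acquire a unique_lock, then wait in a loop
int consume()
{
    std::unique_lock<std::mutex> lock(m);
    while (!ready)        // the loop guards against spurious wake-ups
        cv.wait(lock);    // releases m while blocked, re-acquires it on wake-up
    return data;
}

// signaling side: make the condition true under the lock, then notify
void produce(int value)
{
    {
        std::lock_guard<std::mutex> lock(m);
        data  = value;
        ready = true;
    }
    cv.notify_one();      // unblock one waiting thread
}
```

The example below follows exactly this pattern, with a queue of error codes as the shared state.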

The code below shows an example of using a condition variable to synchronize threads: several “worker” threads may produce an error during their work and they put the error code in a queue. A “logger” thread processes these error codes, by getting them from the queue and printing them. The workers signal the logger when an error occurred. The logger is waiting on the condition variable to be signaled. To avoid spurious wakeups the wait happens in a loop where a boolean condition is checked.

#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <queue>
#include <random>
#include <vector>
#include <chrono>

std::mutex              g_lockprint;
std::mutex              g_lockqueue;
std::condition_variable g_queuecheck;
std::queue<int>         g_codes;
bool                    g_done = false;
bool                    g_notified = false;

void workerfunc(int id, std::mt19937& generator)
{
    // print a starting message
    {
        std::unique_lock<std::mutex> locker(g_lockprint);
        std::cout << "[worker " << id << "]\trunning..." << std::endl;
    }

    // simulate work
    std::this_thread::sleep_for(std::chrono::seconds(1 + generator() % 5));

    // simulate error
    int errorcode = id*100+1;
    {
        std::unique_lock<std::mutex> locker(g_lockprint);
        std::cout << "[worker " << id << "]\tan error occurred: " << errorcode << std::endl;
    }

    // notify error to be logged
    {
        std::unique_lock<std::mutex> locker(g_lockqueue);
        g_codes.push(errorcode);
        g_notified = true;
        g_queuecheck.notify_one();
    }
}

void loggerfunc()
{
    // print a starting message
    {
        std::unique_lock<std::mutex> locker(g_lockprint);
        std::cout << "[logger]\trunning..." << std::endl;
    }

    // loop until end is signaled
    while(!g_done)
    {
        std::unique_lock<std::mutex> locker(g_lockqueue);

        while(!g_notified) // used to avoid spurious wakeups 
        {
            g_queuecheck.wait(locker);
        }

        // if there are error codes in the queue process them
        while(!g_codes.empty())
        {
            std::unique_lock<std::mutex> locker(g_lockprint);
            std::cout << "[logger]\tprocessing error:  " << g_codes.front() << std::endl;
            g_codes.pop();
        }

        g_notified = false;
    }
}

int main()
{
    // initialize a random generator
    std::mt19937 generator((unsigned int)std::chrono::system_clock::now().time_since_epoch().count());

    // start the logger
    std::thread loggerthread(loggerfunc);

    // start the working threads
    std::vector<std::thread> threads;
    for(int i = 0; i < 5; ++i)
    {
        threads.push_back(std::thread(workerfunc, i+1, std::ref(generator)));
    }

    // wait for the workers to finish
    for(auto& t : threads)
        t.join();

    // notify the logger to finish and wait for it
    {
        std::unique_lock<std::mutex> locker(g_lockqueue);
        g_done = true;
        g_notified = true;
    }
    g_queuecheck.notify_one();
    loggerthread.join();

    return 0;
}

Running this code produces output that looks like this (notice the output is different with each run, because each worker thread works, i.e. sleeps, for a random interval):

[logger]        running...
[worker 1]      running...
[worker 2]      running...
[worker 3]      running...
[worker 4]      running...
[worker 5]      running...
[worker 1]      an error occurred: 101
[worker 3]      an error occurred: 301
[worker 2]      an error occurred: 201
[logger]        processing error:  101
[logger]        processing error:  301
[logger]        processing error:  201
[worker 5]      an error occurred: 501
[logger]        processing error:  501
[worker 4]      an error occurred: 401
[logger]        processing error:  401

The wait() method seen above has two overloads:

  • one that only takes a unique_lock; this one releases the lock, blocks the thread and adds it to the queue of threads that are waiting on this condition variable; the thread wakes up when the condition variable is signaled or when a spurious wakeup occurs. When either of those happens, the lock is re-acquired and the function returns.
  • one that in addition to the unique_lock also takes a predicate, used to loop until the predicate returns true; this overload may be used to avoid spurious wakeups. It is basically equivalent to:
    while(!predicate()) 
       wait(lock);

As a result the use of the boolean flag g_notified in the example above can be avoided by using the wait overload that takes a predicate that verifies the state of the queue (empty or not):

void workerfunc(int id, std::mt19937& generator)
{
    // print a starting message
    {
        std::unique_lock<std::mutex> locker(g_lockprint);
        std::cout << "[worker " << id << "]\trunning..." << std::endl;
    }

    // simulate work
    std::this_thread::sleep_for(std::chrono::seconds(1 + generator() % 5));

    // simulate error
    int errorcode = id*100+1;
    {
        std::unique_lock<std::mutex> locker(g_lockprint);
        std::cout << "[worker " << id << "]\tan error occurred: " << errorcode << std::endl;
    }

    // notify error to be logged
    {
        std::unique_lock<std::mutex> locker(g_lockqueue);
        g_codes.push(errorcode);
        g_queuecheck.notify_one();
    }
}

void loggerfunc()
{
    // print a starting message
    {
        std::unique_lock<std::mutex> locker(g_lockprint);
        std::cout << "[logger]\trunning..." << std::endl;
    }

    // loop until end is signaled
    while(!g_done)
    {
        std::unique_lock<std::mutex> locker(g_lockqueue);

        g_queuecheck.wait(locker, [&](){return !g_codes.empty() || g_done;});

        // if there are error codes in the queue process them
        while(!g_codes.empty())
        {
            std::unique_lock<std::mutex> locker(g_lockprint);
            std::cout << "[logger]\tprocessing error:  " << g_codes.front() << std::endl;
            g_codes.pop();
        }
    }
}

In addition to this wait() overloaded method there are two more waiting methods, both with similar overloads that take a predicate to avoid spurious wake-ups:

  • wait_for: blocks the thread until the condition variable is signaled or the specified timeout occurred.
  • wait_until: blocks the thread until the condition variable is signaled or the specified moment in time was reached.

The overload without a predicate of these two functions returns a cv_status value that indicates whether the timeout expired or the wake-up happened because the condition variable was signaled (a spurious wake-up also yields cv_status::no_timeout).
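A wait_for call with the predicate overload might look like this sketch (the wait_with_timeout name and the done flag are invented for illustration); it returns the predicate's final value, so false means the timeout expired:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex              m;
std::condition_variable cv;
bool                    done = false;

// Returns true if the condition became true within the timeout,
// false if the timeout expired first.
bool wait_with_timeout(std::chrono::milliseconds timeout)
{
    std::unique_lock<std::mutex> lock(m);
    // the predicate overload loops internally, so spurious
    // wake-ups cannot cause a premature return
    return cv.wait_for(lock, timeout, []{ return done; });
}
```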

The standard also provides a function called notify_all_at_thread_exit that implements a mechanism to notify other threads that a given thread has finished, including destroying all thread_local objects. It was introduced because waiting on threads with mechanisms other than join() could lead to incorrect and fatal behavior when thread_locals were used, since their destructors could be called even after the waiting thread resumed and possibly also finished (see N3070 and N2880 for more). Typically, a call to this function must happen just before the thread exits.

Below is an example of how notify_all_at_thread_exit can be used together with a condition_variable to synchronize two threads:

std::mutex              g_lockprint;
std::mutex              g_lock;
std::condition_variable g_signal;
bool                    g_done;

void workerfunc(std::mt19937& generator)
{
   {
      std::unique_lock<std::mutex> locker(g_lockprint);
      std::cout << "worker running..." << std::endl;
   }

   std::this_thread::sleep_for(std::chrono::seconds(1 + generator() % 5));

   {
      std::unique_lock<std::mutex> locker(g_lockprint);
      std::cout << "worker finished..." << std::endl;
   }

   std::unique_lock<std::mutex> lock(g_lock);
   g_done = true;
   std::notify_all_at_thread_exit(g_signal, std::move(lock));
}

int main()
{
   // initialize a random generator
   std::mt19937 generator((unsigned int)std::chrono::system_clock::now().time_since_epoch().count());

   std::cout << "main running..." << std::endl;

   std::thread worker(workerfunc, std::ref(generator));
   worker.detach();

   std::cout << "main crunching..." << std::endl;

   std::this_thread::sleep_for(std::chrono::seconds(1 + generator() % 5));

   {
      std::unique_lock<std::mutex> locker(g_lockprint);
      std::cout << "main waiting for worker..." << std::endl;
   }

   std::unique_lock<std::mutex> lock(g_lock);
   while(!g_done) // avoid spurious wake-ups
      g_signal.wait(lock);

   std::cout << "main finished..." << std::endl;

   return 0;
}

That would output either (if the worker finishes work before main)

main running...
worker running...
main crunching...
worker finished...
main waiting for worker...
main finished...

or (if the main finishes work before the worker):

main running...
worker running...
main crunching...
main waiting for worker...
worker finished...
main finished...

C++11 concurrency: locks

In a previous post I introduced the C++11 support for threads. In this article I will discuss the locking features provided by the standard that one can use to synchronize access to shared resources.

The core syncing primitive is the mutex, which comes in four flavors, in the <mutex> header:

  • mutex: provides the core lock() and unlock() methods, plus the non-blocking try_lock() method that returns immediately if the mutex is not available.
  • recursive_mutex: allows multiple acquisitions of the mutex from the same thread.
  • timed_mutex: similar to mutex, but it comes with two more methods, try_lock_for() and try_lock_until(), that try to acquire the mutex for a period of time or until a moment in time is reached.
  • recursive_timed_mutex: is a combination of timed_mutex and recursive_mutex.
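As a small sketch of the timed flavors (the function name is invented), try_lock_for() gives up after a duration instead of blocking indefinitely:

```cpp
#include <chrono>
#include <mutex>

std::timed_mutex tm;

// Attempt to enter the critical section, giving up after 100 ms.
bool do_work_with_deadline()
{
    if (tm.try_lock_for(std::chrono::milliseconds(100)))
    {
        // ... critical section ...
        tm.unlock();
        return true;
    }
    return false;   // the mutex could not be acquired in time
}
```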

Here is a simple example of using a mutex to sync the access to the std::cout shared object.

#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>

std::mutex g_lock;

void func()
{
    g_lock.lock();

    std::cout << "entered thread " << std::this_thread::get_id() << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(rand() % 10));
    std::cout << "leaving thread " << std::this_thread::get_id() << std::endl;

    g_lock.unlock();
}

int main()
{
    srand((unsigned int)time(0));

    std::thread t1(func);
    std::thread t2(func);
    std::thread t3(func);

    t1.join();
    t2.join();
    t3.join();

    return 0;
}

In the next example we’re creating a simple thread-safe container (that just uses std::vector internally) that has methods like add() and addrange(), with the latter implemented by calling the former.

template <typename T>
class container 
{
    std::mutex _lock;
    std::vector<T> _elements;
public:
    void add(T element) 
    {
        _lock.lock();
        _elements.push_back(element);
        _lock.unlock();
    }

    void addrange(int num, ...)
    {
        va_list arguments;

        va_start(arguments, num);

        for (int i = 0; i < num; i++)
        {
            _lock.lock();
            add(va_arg(arguments, T));
            _lock.unlock();
        }

        va_end(arguments); 
    }

    void dump()
    {
        _lock.lock();
        for(auto e : _elements)
            std::cout << e << std::endl;
        _lock.unlock();
    }
};

void func(container<int>& cont)
{
    cont.addrange(3, rand(), rand(), rand());
}

int main()
{
    srand((unsigned int)time(0));

    container<int> cont;

    std::thread t1(func, std::ref(cont));
    std::thread t2(func, std::ref(cont));
    std::thread t3(func, std::ref(cont));

    t1.join();
    t2.join();
    t3.join();

    cont.dump();

    return 0;
}

Running this program results in a deadlock.

The reason for the deadlock is that a thread that owns a (plain) mutex cannot re-acquire it; such an attempt results in a deadlock. That’s where recursive_mutex comes into the picture: it allows a thread to acquire the same mutex multiple times. The maximum number of acquisitions is not specified, but if that number is reached, calling lock() throws a std::system_error. Therefore the fix for this implementation (apart from changing addrange() so it does not lock around the call to add()) is to replace the mutex with a recursive_mutex.

template <typename T>
class container 
{
    std::recursive_mutex _lock;
    // ...
};

Then the output looks something like this:

6334
18467
41
6334
18467
41
6334
18467
41

Notice the same numbers are generated in each call to func(). That is because the seed is thread local, and the call to srand() only initializes the seed from the main thread. In the other worker threads it doesn’t get initialized, and therefore you get the same numbers every time.

Explicit locking and unlocking can lead to problems, such as forgetting to unlock or acquiring locks in the wrong order, which can cause deadlocks. The standard provides several classes and functions to help with these problems.

The wrapper classes allow consistent use of the mutexes in a RAII-style with auto locking and unlocking within the scope of a block. These wrappers are:

  • lock_guard: when the object is constructed it attempts to acquire ownership of the mutex (by calling lock()) and when the object is destructed it automatically releases the mutex (by calling unlock()). This is a non-copyable class.
  • unique_lock: is a general-purpose mutex wrapper that, unlike lock_guard, also provides support for deferred locking, timed locking, recursive locking, transfer of lock ownership and use with condition variables. This class is also non-copyable, but it is movable.

With these wrappers we can rewrite the container class like this:

template <typename T>
class container 
{
    std::recursive_mutex _lock;
    std::vector<T> _elements;
public:
    void add(T element) 
    {
        std::lock_guard<std::recursive_mutex> locker(_lock);
        _elements.push_back(element);
    }

    void addrange(int num, ...)
    {
        va_list arguments;

        va_start(arguments, num);

        for (int i = 0; i < num; i++)
        {
            std::lock_guard<std::recursive_mutex> locker(_lock);
            add(va_arg(arguments, T));
        }

        va_end(arguments); 
    }

    void dump()
    {
        std::lock_guard<std::recursive_mutex> locker(_lock);
        for(auto e : _elements)
            std::cout << e << std::endl;
    }
};

Notice that attempting to call try_lock_for() or try_lock_until() on a unique_lock that wraps a non-timed mutex results in a compilation error.

The constructors of these wrapper guards have overloads that take an argument indicating the locking strategy. The available strategies are:

  • defer_lock of type defer_lock_t: do not acquire ownership of the mutex
  • try_to_lock of type try_to_lock_t: try to acquire ownership of the mutex without blocking
  • adopt_lock of type adopt_lock_t: assume the calling thread already has ownership of the mutex

These strategies are declared like this:

struct defer_lock_t { };
struct try_to_lock_t { };
struct adopt_lock_t { };

constexpr std::defer_lock_t defer_lock = std::defer_lock_t();
constexpr std::try_to_lock_t try_to_lock = std::try_to_lock_t();
constexpr std::adopt_lock_t adopt_lock = std::adopt_lock_t();
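A small sketch of how the first two strategies are used with unique_lock (the function names are invented for illustration):

```cpp
#include <mutex>

std::mutex m;

// defer_lock: construct the wrapper without locking, lock explicitly later
bool deferred_example()
{
    std::unique_lock<std::mutex> lock(m, std::defer_lock);
    // ... the mutex is not held yet ...
    lock.lock();
    return lock.owns_lock();   // true: we now hold the mutex
}                              // unlocked automatically here

// try_to_lock: attempt to acquire without blocking
bool try_example()
{
    std::unique_lock<std::mutex> lock(m, std::try_to_lock);
    return lock.owns_lock();   // false if another thread holds m
}
```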

Apart from these wrappers for mutexes, the standard also provides two functions for locking one or more mutexes at once:

  • lock: locks the mutexes using a deadlock-avoidance algorithm (built from calls to lock(), try_lock() and unlock()).
  • try_lock: tries to lock the mutexes by calling try_lock() in the order in which the mutexes were specified.

Here is an example of a deadlock case: we have a container of elements and we have a function exchange() that swaps one element from a container into the other container. To be thread-safe, this function synchronizes the access to the two containers, by acquiring a mutex associated with each container.

template <typename T>
class container 
{
public:
    std::mutex _lock;
    std::set<T> _elements;

    void add(T element) 
    {
        _elements.insert(element);
    }

    void remove(T element) 
    {
        _elements.erase(element);
    }
};

void exchange(container<int>& cont1, container<int>& cont2, int value)
{
    cont1._lock.lock();
    std::this_thread::sleep_for(std::chrono::seconds(1)); // <-- simulates the deadlock
    cont2._lock.lock();    

    cont1.remove(value);
    cont2.add(value);

    cont1._lock.unlock();
    cont2._lock.unlock();
}

Suppose this function is called from two different threads: in the first, an element is removed from container 1 and added to container 2, and in the second it is removed from container 2 and added to container 1. This can lead to a deadlock (if the thread context switches from one thread to the other just after acquiring the first lock).

int main()
{
    srand((unsigned int)time(NULL));

    container<int> cont1; 
    cont1.add(1);
    cont1.add(2);
    cont1.add(3);

    container<int> cont2; 
    cont2.add(4);
    cont2.add(5);
    cont2.add(6);

    std::thread t1(exchange, std::ref(cont1), std::ref(cont2), 3);
    std::thread t2(exchange, std::ref(cont2), std::ref(cont1), 6);

    t1.join();
    t2.join();

    return 0;
}

To fix the problem, you can use std::lock, which guarantees the locks are acquired in a deadlock-free way:

void exchange(container<int>& cont1, container<int>& cont2, int value)
{
    std::lock(cont1._lock, cont2._lock); 

    cont1.remove(value);
    cont2.add(value);

    cont1._lock.unlock();
    cont2._lock.unlock();
}
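The manual unlock() calls can also be eliminated by combining std::lock with the adopt_lock strategy, so the mutexes are released automatically even if remove() or add() throws. A sketch using the same container class:

```cpp
#include <mutex>
#include <set>

template <typename T>
class container
{
public:
    std::mutex _lock;
    std::set<T> _elements;

    void add(T element)    { _elements.insert(element); }
    void remove(T element) { _elements.erase(element); }
};

void exchange(container<int>& cont1, container<int>& cont2, int value)
{
    // acquire both mutexes with the deadlock-avoidance algorithm
    std::lock(cont1._lock, cont2._lock);

    // adopt the already-held mutexes so they are unlocked automatically
    std::lock_guard<std::mutex> lock1(cont1._lock, std::adopt_lock);
    std::lock_guard<std::mutex> lock2(cont2._lock, std::adopt_lock);

    cont1.remove(value);
    cont2.add(value);
}
```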

Hopefully this walkthrough has helped you understand the basics of the synchronization functionality supported in C++11.

C++11 concurrency: threads

C++11 provides richer support for concurrency than the previous standard. Among the new features is the std::thread class (from the <thread> header) that represents a single thread of execution. Unlike other APIs for creating threads, such as CreateThread, std::thread can work with (regular) functions, lambdas or functors (i.e. classes implementing operator()) and allows you to pass any number of parameters to the thread function.

Let’s see a simple example.

#include <thread>

void func()
{
   // do some work
}

int main()
{
   std::thread t(func);
   t.join();

   return 0;
}

In this example t is a thread object representing the thread under which function func() runs. The call to join blocks the calling thread (in this case the main thread) until the joined thread finishes execution.

If the thread function returns a value, it is ignored. However, the function can take any number of parameters.

void func(int a, double b, const std::string& c)
{
    std::cout << a << ", " << b << ", " << c.c_str() << std::endl;
}

int main()
{
   std::thread t(func, 1, 3.14, "pi");
   t.join();

   return 0;
}

The output is:

1, 3.14, pi

It is important to note that the parameters to the thread function are passed by value. If you need to pass references you need to use std::ref or std::cref.
The following program prints 42.

void func(int& a)
{
   a++;
}

int main()
{
   int a = 42;
   std::thread t(func, a);
   t.join();

   std::cout << a << std::endl;

   return 0;
}

But if we change to t(func, std::ref(a)) it prints 43.

In the next example we execute a lambda on a second thread. The lambda doesn’t do much, except for printing some message and sleeping for a while.

#include <thread>
#include <chrono>
#include <iostream>

auto func = []() {
    std::cout << "thread " << std::this_thread::get_id() << " started" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(rand()%10));
    std::cout << "thread " << std::this_thread::get_id() << " finished" << std::endl;
};

int main()
{
    srand((unsigned int)time(0));

    std::thread t(func);
    t.join();

    return 0;
}

The output looks like this (obviously the id of the threads differ for each run).

thread 5412 started
thread 5412 finished

But if we start two threads, the output looks different:

    std::thread t1(func);
    std::thread t2(func);
    
    t1.join();
    t2.join();

The output might look like this:

thread thread 10180 started9908 started

thread 10180 finished
thread 9908 finished

The reason is that the function running on a separate thread uses std::cout, an object representing a stream. This is a shared object of a class that is not thread-safe, so access to it from different threads must be synchronized. There are different mechanisms provided by C++11 to do that (we will discuss them in a later post); one of them is std::mutex (notice there are four flavors of mutexes).

The correct, synchronized code should look like this:

#include <mutex>

std::mutex m;

auto func = []() {
    m.lock();
    std::cout << "thread " << std::this_thread::get_id() << " started" << std::endl;
    m.unlock();

    std::this_thread::sleep_for(std::chrono::seconds(rand()%10));

    m.lock();
    std::cout << "thread " << std::this_thread::get_id() << " finished" << std::endl;
    m.unlock();
};

Now the output is properly interleaved:

thread 5032 started
thread 7672 started
thread 5032 finished
thread 7672 finished

In these examples I have used several functions from the std::this_thread namespace (also defined in the <thread> header). The helper functions from this namespace are:

  • get_id: returns the id of the current thread
  • yield: tells the scheduler to run other threads and can be used when you are in a busy waiting state
  • sleep_for: blocks the execution of the current thread for at least the specified period
  • sleep_until: blocks the execution of the current thread until the specified point in time has been reached
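The difference between the two sleep functions, sketched with an invented helper name: sleep_for takes a duration, while sleep_until takes an absolute time point, typically computed from a clock's now():

```cpp
#include <chrono>
#include <thread>

// Sleep until an absolute deadline, here "now + d" on the steady clock.
void nap_until(std::chrono::milliseconds d)
{
    auto deadline = std::chrono::steady_clock::now() + d;
    std::this_thread::sleep_until(deadline);
}
```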

Apart from join() the thread class provides two more operations:

  • swap: exchanges the underlying handles of two thread objects
  • detach: allows a thread of execution to continue independently of the thread object. Detached threads are no longer joinable (you cannot wait for them).

int main()
{
    std::thread t(func);
    t.detach();

    return 0;
}

What happens though if a function running on a separate thread throws an exception? You cannot actually catch that exception in the thread that’s waiting for the faulty thread, because std::terminate is called and this aborts the program.

If func() throws an exception, the following catch block won’t be reached.

    try 
    {
        std::thread t1(func);
        std::thread t2(func);
    
        t1.join();
        t2.join();
    }
    catch(const std::exception& ex)
    {
        std::cout << ex.what() << std::endl;
    }

What can be done then? To propagate exceptions between threads you can catch them in the thread function and store them somewhere they can later be retrieved. Possible solutions are detailed here and here.
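One standard way to do that is std::exception_ptr: the thread function catches the exception with std::current_exception() and stores it, and the joining thread re-throws it with std::rethrow_exception(). A minimal sketch (the function names and the stored message are invented for illustration):

```cpp
#include <exception>
#include <stdexcept>
#include <string>
#include <thread>

std::exception_ptr g_error;   // written by the worker, examined after join()

void worker()
{
    try
    {
        throw std::runtime_error("error from worker");
    }
    catch (...)
    {
        g_error = std::current_exception();   // capture for later re-throw
    }
}

std::string run_and_report()
{
    std::thread t(worker);
    t.join();                  // the exception did not escape the thread

    if (g_error)
    {
        try { std::rethrow_exception(g_error); }
        catch (const std::exception& ex) { return ex.what(); }
    }
    return "no error";
}
```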

In future posts we will look at other concurrency features from C++11 (such as synchronization mechanisms or tasks).

Split Button Control

What is a split button?

In the previous post we presented the command link button control that is one of the controls introduced with Windows Vista. Another such control is the split button. This is a button that:

  • acts like a regular button when pressed, but
  • displays a drop-down menu when its drop-down arrow is pressed

This is actually a regular button that has one of the window styles BS_SPLITBUTTON or BS_DEFSPLITBUTTON set.

Split button control

How to add a split button

If you work with Microsoft Visual Studio 2008 or newer, you can find “Split Button Control” in the resource editor toolbox. The first step is to drag it from the toolbox onto the dialog resource template.

Split Button Control

You can set the caption and the handler for BN_CLICKED just like with any regular push button.

void CSplitButton2008Dlg::OnBnClickedSplitCodexpert()
{
   ShellExecute(NULL, _T("open"), _T("http://www.codexpert.ro"), NULL, NULL, SW_SHOWNORMAL);
}

However, you have to define the drop-down menu explicitly.

Split button drop down menu

To specify the menu to be displayed, use SetDropDownMenu.

m_buttonCodexpert.SetDropDownMenu(IDR_MENU_CODEXPERT, 0);

Then, add handlers for the menu items and handle the commands.

BEGIN_MESSAGE_MAP(CSplitButton2008Dlg, CDialog)
   /* ... */
   ON_COMMAND(ID_CODEXPERT_FORUM, &CSplitButton2008Dlg::OnCodexpertForum)
   ON_COMMAND(ID_CODEXPERT_BLOG, &CSplitButton2008Dlg::OnCodexpertBlog)
   ON_COMMAND(ID_CODEXPERT_ARTICLES, &CSplitButton2008Dlg::OnCodexpertArticles)
   ON_COMMAND(ID_CODEXPERT_RESOURCES, &CSplitButton2008Dlg::OnCodexpertResources)
END_MESSAGE_MAP()

void CSplitButton2008Dlg::OnCodexpertForum()
{
   ShellExecute(NULL, _T("open"), _T("http://www.codexpert.ro/forum"), NULL, NULL, SW_SHOWNORMAL);
}

void CSplitButton2008Dlg::OnCodexpertBlog()
{
   ShellExecute(NULL, _T("open"), _T("http://www.codexpert.ro/blog"), NULL, NULL, SW_SHOWNORMAL);
}

void CSplitButton2008Dlg::OnCodexpertArticles()
{
   ShellExecute(NULL, _T("open"), _T("http://www.codexpert.ro/articole.php"), NULL, NULL, SW_SHOWNORMAL);
}

void CSplitButton2008Dlg::OnCodexpertResources()
{
   ShellExecute(NULL, _T("open"), _T("http://www.codexpert.ro/resurse.php"), NULL, NULL, SW_SHOWNORMAL);
}

Downloads

Download: SplitButton2008.zip (67 KB)

    <filesystem> header in Visual Studio 2012

    One of the new supported C++ headers in Visual Studio 2012 is <filesystem>. It defines types and functions for working with files and folders. It’s not a C++11 header, but it’s part of the TR2 proposal, hence the namespace it uses, std::tr2::sys. Among others, the header provides functionality for creating, renaming, deleting, or checking the state and type of a path. It also offers support for iterating through the contents of a folder. Unfortunately, the MSDN documentation is not that good; it’s rather reference documentation, missing any examples.

    Here is a simple sample that demonstrates some of the features from this new header.

    #include <iostream>
    #include <filesystem>
    
    int main() 
    {
        std::tr2::sys::path mypath="c:\\temp";
    
        std::cout << "path_exists  = " << std::tr2::sys::exists(mypath) << '\n';
        std::cout << "is_directory = " << std::tr2::sys::is_directory(mypath) << '\n';
        std::cout << "is_file      = " << std::tr2::sys::is_regular_file(mypath) << '\n';
    
        auto lasttime = std::tr2::sys::last_write_time(mypath);
        char buffer[50] = {0};
        ctime_s(buffer, sizeof(buffer), &lasttime);
        std::cout << "last_write   = " << buffer << '\n';
    
        std::tr2::sys::recursive_directory_iterator endit;
        std::tr2::sys::recursive_directory_iterator it(mypath);
        for(; it != endit; ++it)
        {
            auto& apath = it->path();
    
            if(std::tr2::sys::is_directory(apath) && std::tr2::sys::is_symlink(apath))
            {
                it.no_push();
            }
    
            print(apath, it.level());
        }
    
        return 0;
    }

    And the output is

    path_exists  = 1
    is_directory = 1
    is_file      = 0
    last_write   = Tue Jan 29 21:45:39 2013
    
    +dir1
    ├+dir11
    │├+dir111
    ││├+dir1111
    │││├-file1111.txt
    │││├-file1112.txt
    ││├-file111.txt
    ││├-file112.txt
    │├-file11.txt
    │├-file12.txt
    ├+dir12
    ├-file11.txt
    ├-file12.txt
    ├-file13.txt
    -file1.txt
    -file2.txt
    

    What’s missing from the code above is the print method, but that’s just about formatting details:

    void print(const std::tr2::sys::path& path, int level)
    {
        for(int i = 0; i < level-1; ++i)
            std::cout << (char)179;
    
        if(level > 0)
            std::cout << (char)195;
    
        if(std::tr2::sys::is_directory(path))
            std::cout << "+";
        else if(std::tr2::sys::is_regular_file(path))
            std::cout << "-";
        else 
            std::cout << " ";
    
        std::cout << path.filename() << std::endl;
    }

    Debugging Tips: Memory Leaks Isolation

    In a recent post we showed how to detect memory leaks in MFC. In this post we present some tips for breaking on a particular allocation that leaks. However, you must note that this technique only works if you are able to find a reproducible allocation, with the same number.

    Here is a memory leak report:

    Detected memory leaks!
    Dumping objects ->
    {183} normal block at 0x0036B320, 128 bytes long.
     Data:  42 61 62 61 20 53 61 66 74 61 00 CD CD CD CD CD 
    Object dump complete.
    The program '[1758] MyTest.exe: Native' has exited with code 0 (0x0).
    

    The allocation number is shown in curly brackets {}; in this case it is 183.

    The steps to break when the leaking allocation is made are:

    • Make sure you have the adequate reporting mode for memory leaks (see Finding Memory Leaks Using the CRT Library).
    • Run the program several times until you find reproducible allocation numbers ({183} in the example above) in the memory leaks report at the end of running the program.
    • Put a breakpoint somewhere at the start of the program so you can break as early as possible.
    • Start the application with the debugger.
    • When the initial breakpoint is hit, enter {,,msvcr90d.dll}_crtBreakAlloc in the Name column of the Watch window and the allocation number you want to investigate (183 in this example) in the Value column.
    • Continue debugging (F5).
    • The execution stops at the specified allocation. You can use the Call Stack to navigate back to your code where the allocation was triggered.
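The same break can also be requested from code, which is handy when editing the watch window is inconvenient. The sketch below uses _CrtSetBreakAlloc from the MSVC debug CRT; the preprocessor guards are only there so the snippet compiles with other toolchains, where it does nothing:

```cpp
#ifdef _MSC_VER
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#endif

// Arrange to break into the debugger when the given CRT allocation
// request number (e.g. 183 from the report above) is made.
void break_on_allocation(long alloc_no)
{
#if defined(_MSC_VER) && defined(_DEBUG)
    _CrtSetBreakAlloc(alloc_no);
#else
    (void)alloc_no; // no-op outside the MSVC debug CRT
#endif
}
```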

    Visual Studio 2012 available for download

    Visual Studio 2012 and .NET Framework 4.5 became available on 15 August for MSDN subscribers, who can download them from here. Because the new features are discussed in detail in many places I will not attempt to enumerate everything. However, I want to point out some of the new things available for native development.

    • more C++ standard support: includes strongly-typed enums, range-based for loops, stateless lambdas, override and final, as well as new STL headers (<atomic>, <chrono>, <condition_variable>, <filesystem>, <future>, <mutex>, <ratio>, <thread>)
    • C++ compiler enhancements: auto-vectorizer and auto-parallelizer
    • IDE: C++ code-snippets, semantic colorization and (the long awaited) C++/CLI IntelliSense
    • parallel libraries: C++ AMP, which allows us to write parallel programs that run on heterogeneous hardware, and new additions to the Parallel Patterns Library (especially for async programming)
    • Windows 8 development: a native XAML framework allows writing apps for WinRT; that is also possible with DirectX (and the two can actually be mixed together)
    • Unit test framework: allows you to write light-weight unit tests for your C++ applications
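To get a feel for the newly supported language features, here is a small sketch (my own example, not from the product documentation) combining strongly-typed enums, a stateless lambda, a range-based for loop, and override/final:

```cpp
#include <vector>
#include <algorithm>

enum class color { red, green, blue };   // strongly-typed enum

struct base
{
    virtual ~base() {}
    virtual int value() const { return 1; }
};

struct derived final : base              // final: class cannot be derived from
{
    int value() const override { return 2; }  // override: verified by the compiler
};

// Doubles every element using a stateless lambda and returns the result.
std::vector<int> doubled(std::vector<int> v)
{
    std::transform(v.begin(), v.end(), v.begin(),
                   [](int n) { return n * 2; });
    return v;
}

// Sums the elements with a range-based for loop.
int sum(const std::vector<int>& v)
{
    int total = 0;
    for (auto n : v)
        total += n;
    return total;
}
```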

    On the other hand, not much was done for MFC, which only benefits from a series of bug fixes. The only thing worth noting is a reduction in the size of statically-linked MFC applications that use “MFC controls”. You can read details about the problem and the solution here.

    More about these can be found in the following articles:

    What you have to note is that at this point VS2012 has some limitations:

    • You cannot target WinXP with this release
    • There is no Express version that allows you to write native C++ apps (for the desktop)

    However, Microsoft has promised to solve these with an upgrade later this autumn (but no dates have been disclosed). You can read about that here:

    C++11: non-member begin() and end()

    One of the additions to the C++ 2011 standard that is perhaps not so widely publicized is the non-member begin() and end() functions. In the STL all containers have member begin() and end() functions that return iterators to the beginning and the end of the container. Therefore, iterating over a container could look like this:

    std::vector<int> v;
    for (auto it = v.begin(); it != v.end(); ++it)
        std::cout << *it << std::endl;

    The problem here is that not all user-defined containers have begin() and end() members, which makes them impossible to use with the STL algorithms or any other user-defined template function that requires iterators. The situation is even worse with C arrays: using a C array with a standard algorithm looks quite different from using a vector.

    int inc(int n) {return n+1;}
    
    int a[] = {1,2,3,4,5};
    std::transform(&a[0], &a[0] + sizeof(a)/sizeof(a[0]), &a[0], inc);
    
    std::vector<int> v(&a[0], &a[0] + sizeof(a)/sizeof(a[0]));
    std::transform(v.begin(), v.end(), v.begin(), inc);

    The non-member begin() and end() functions are extensible, in the sense that they can be overloaded for any type (including C arrays). Herb Sutter argues in his Elements of Modern C++ Style article that you should always prefer the non-member version: it promotes uniformity and consistency and allows for more generic programming.

    Always use nonmember begin(x) and end(x) (not x.begin() and x.end()), because begin(x) and end(x) are extensible and can be adapted to work with all container types – even arrays – not just containers that follow the STL style of providing x.begin() and x.end() member functions.

    The code above can now look like this:

    std::vector<int> v {1,2,3,4,5};
    for(auto it = begin(v); it != end(v); ++it)
        std::cout << *it << std::endl;
    
    std::transform(begin(v), end(v), begin(v), [](int n){return n+1;});

    As for the C array, we can overload begin() and end() for it, perhaps like this:

    template <typename T, size_t size>
    T* begin(T (&c)[size])
    {
        return &c[0];
    }
    
    template <typename T, size_t size>
    T* end(T (&c)[size])
    {
        return &c[0] + size;
    }

    With this available we can write:

    int a[] = {1,2,3,4,5};
    
    std::transform(begin(a), end(a), begin(a), [](int n) {return n+1;});
    
    for (auto it = begin(a); it != end(a); ++it)
        std::cout << *it << std::endl;
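In fact, on a conforming C++11 compiler there is no need to write these overloads yourself: the standard library already declares std::begin() and std::end() in the <iterator> header, including the overloads for arrays, so the hand-written versions above are only needed when targeting older compilers. A small sketch:

```cpp
#include <iterator>
#include <vector>

// Builds a vector from a C array using the std::begin()/std::end()
// overloads for arrays declared in <iterator>.
std::vector<int> make_vector()
{
    int a[] = { 1, 2, 3, 4, 5 };
    return std::vector<int>(std::begin(a), std::end(a));
}
```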

    If you argue that non-member begin() and end() break encapsulation, then I suggest you read Scott Meyers’ How Non-Member Functions Improve Encapsulation, where he explains the opposite.

    If you’re writing a function that can be implemented as either a member or as a non-friend non-member, you should prefer to implement it as a non-member function. That decision increases class encapsulation. When you think encapsulation, you should think non-member functions.

    There is still one question left: what about the other container functions that return iterators, rbegin(), rend(), cbegin(), cend(), crbegin(), and crend()? Currently they are not implemented as non-member functions. According to Herb Sutter, the omission of cbegin() and cend() was an oversight and it will be fixed.

    Additional readings:

    Five Years of CODEXPERT

    Today CODEXPERT turns five years old. I started this project out of the desire to bring together programming enthusiasts from Romania (and beyond) and to build a community of developers working with native technologies. Our intention remains unchanged, and we hope to achieve even more in the future than we have so far.

    Unfortunately, native (unmanaged) technologies do not enjoy as much publicity as managed ones (.NET or Java). There is hope, though, that with the finalization of the new C++ standard this will change. The trends show that C++ is coming back into focus, including at the big conferences organized by Microsoft, which for a decade spoke of nothing but .NET.

    Over these five years our site has gone through various changes. At its center has always been the discussion forum, where we have so far discussed more than 2000 topics in a total of over 13000 messages. We have published articles, open-source projects, and resources for programmers such as free books and online tutorials. Now, on our fifth anniversary, we bring some news:

    1. First, we are launching this blog, where we intend to tackle various subjects, from technical problems to product releases. To reach as wide an audience as possible, some posts will probably be in English.
    2. Second, we have redesigned the main site. The new layout is fully HTML5+CSS3 compliant, which means that browsing with older browsers such as IE7/IE8 will run into difficulties. We recommend using a more recent browser version that renders HTML5 documents correctly.

    We look forward to your opinions and suggestions, both here and in the forum.

    The CODEXPERT team
    (Marius Bancila & Ovidiu Cucu)