Recently I noticed a post on a social media site where the author was complaining about C++. Their main complaint was how feature-rich and confusing the language is. They were obviously not a C++ programmer, and they overlooked one of C++'s most powerful and best features: compile-time logic.
Compile time logic in C++
C++ is able to perform certain logic faster than nearly any other language because it processes that logic at compile time instead of at runtime. This effectively makes the runtime cost of your algorithm a constant O(1), instead of depending on the complexity of the logic you wrote.
The mechanism behind this is C++'s templating ability. A template is just a pattern for the compiler: when you define a template, you are creating an additional rule for the compiler, not for your program. When your program is compiled into machine code there is no reference to templates. This is also why you cannot step through template operations in a debugger: everything has already been calculated before the program runs.
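A classic standalone illustration of this idea is the compile-time factorial. The recursion happens entirely while the compiler instantiates templates, so the binary only ever contains the final constant:

```cpp
#include <cassert>

// The compiler evaluates the recursion during template instantiation;
// no factorial code runs at runtime.
template <unsigned N>
struct Factorial {
    static constexpr unsigned value = N * Factorial<N - 1>::value;
};

// terminating specialization, the base case of the recursion
template <>
struct Factorial<0> {
    static constexpr unsigned value = 1;
};

// static_assert proves the value exists before the program ever runs
static_assert(Factorial<5>::value == 120, "computed at compile time");
```

If Factorial<5>::value ever reaches a debugger, it is just the literal 120.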
Now, as you saw in part one, we implemented a lot of our manager's logic and structure through templates. The benefit is that we get to implement a lot of generic behavior for our manager using compile-time strategies.
Implementing Compile Time Logic
Our Manager was designed around our TypeList object, as you saw. Everything you know about a TypeList is available at compile time, because it is a templated type. Based on how the source code defines our TypeList, we get our templated list of types, and operations on that list are resolved during compilation (O(1) at runtime). As a refresher, here is the definition of our TypeList:
template <template <class> class ... Types>
struct TemplateTypeList {};
As always you can find full examples of the code you’re seeing in the egm repository.
C++ has introduced a lot of tools over the years that aid in doing compile-time logic. Here, if you haven't seen it before, we use a parameter pack (the ... above), introduced with variadic templates in C++11, to represent any number of types for the programmer to use. (Fold expressions, a related feature for expanding packs, came later in C++17.) Before variadic templates, all template logic had to be bound to some maximum number of parameters chosen up front, but with a parameter pack the compiler can determine just how many types it needs, with no need for bounds.
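A tiny standalone illustration of this open-ended expansion: sizeof... (also C++11) yields the number of types in a pack as a compile-time constant, for any number of arguments.

```cpp
#include <cassert>
#include <cstddef>

// The pack accepts any number of type arguments; no upper bound
// needs to be written anywhere.
template <class... Types>
struct Count {
    static constexpr std::size_t value = sizeof...(Types);
};

static_assert(Count<int, double, char>::value == 3, "three types");
static_assert(Count<>::value == 0, "packs may be empty");
```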
So now the main way to speed up our program is to use the graph of types defined by the relations in these TypeLists to make some of our logic compile time. The graph I am talking about is this: each type has a TypeList of other types representing the bases of the type itself (remember the GenBaseHierarchy in part 1). The visitor we want to create will visit this graph in the order we define. To further explain the graph, let me define a quick example below:
template <class ManagerPolicy>
struct Base : public ManagerPolicy {
    using Info = TypeInfo<Base>; // no parents
    ... // root of tree
};

template <class ManagerPolicy>
struct Left : public ManagerPolicy {
    using Info = TypeInfo<Left, TemplateTypeList<Base>>;
    // this has one parent, "Base"
};

template <class ManagerPolicy>
struct Right : public ManagerPolicy {
    using Info = TypeInfo<Right, TemplateTypeList<Base>>;
    // this has one parent, "Base"
};

template <class ManagerPolicy>
struct Derived : public ManagerPolicy {
    using Info = TypeInfo<Derived, TemplateTypeList<Right, Left>>;
    // this has two parents, "Right" and "Left"
};
These are EGM-compatible types, so the syntax is a little special, but our general graph structure looks like the model in the image below on uml.cafe.

Here you can see the classic "dangerous" diamond inheritance pattern that we have defined. The goal now is to write a general routine that uses template definitions in our C++ code to produce compile-time logic for traversing this graph. That is generally what I mean by a visitor: some sort of function or procedure that "visits" all parts of a data structure. A common visitor that people like to use is the classic foreach function, which usually takes a data structure and another function to execute on each item in that data structure. For this article we will be implementing a visitBasesBFS visitor to show different techniques of visiting and compile-time logic.
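For reference, the runtime foreach idea looks like this in plain C++ (a minimal generic sketch, not egm code):

```cpp
#include <cassert>
#include <vector>

// A minimal runtime foreach: apply the callback to every item.
template <class Container, class Func>
void foreach(const Container& items, Func visit) {
    for (const auto& item : items) {
        visit(item);
    }
}

// usage: sum the items through the visitor callback
int sum_example() {
    std::vector<int> values{1, 2, 3};
    int sum = 0;
    foreach(values, [&sum](int v) { sum += v; });
    return sum; // 6
}
```

Our compile-time visitor does the same job, except the "data structure" being walked is a graph of types, and the walking happens inside the compiler.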
Visit Bases BFS
We have all probably learned about BFS and DFS, or "breadth first search" and "depth first search," in an algorithms class or online tutorial. The easiest for us to implement with the design we have used is BFS.
A good way to think about traversing breadth first is to put the children of each node into a queue (a first in, first out data structure). So our approach will be to store the types we still need to visit in a TypeList that we will treat as a queue.
Okay, so first we want to set up the struct and function for traversal. We define a template class with, at first, two template parameters: the Visitor type provided by the user of our visitor, and the Queue, which will be our TypeList.
template <class Visitor, class Queue>
struct VisitBasesBFS;
We will now define specializations of this template to expand the TypeList, allowing subsequent calls to visit the next node in the graph. This is where we implement the queue visiting order. We use a lot of utilities here from egm, which I described in the last article, so make sure to check that out. Something new here is what I call a DummyManager type. This is mostly a utility type for accessing a type's compile-time meta info without putting the type into a Manager implementation. We need it specifically to access the type's BaseList that we defined above.
// defining aliases for more succinct example :P
template <template <class> class ... Types>
using TT = TemplateTypeList<Types...>;

template <class R, class L>
using Cat = typename TemplateTypeListCat<R, L>::result;

using DB = DummyManager::BaseElement;

// Expand the Queue
template <class Visitor, template <class> class Front, template <class> class ... Rest>
struct VisitBasesBFS<Visitor, TT<Front, Rest...>> {
    // define next struct to execute visit in queue order
    using Bases = typename Front<DB>::Info::BaseList;
    using Next = VisitBasesBFS<Visitor, Cat<TT<Rest...>, Bases>>;
    static void visit(Visitor& visitor) {
        visitor.template visit<Front>();
        Next::visit(visitor);
    }
};

// terminating specialization
template <class Visitor>
struct VisitBasesBFS<Visitor, TT<>> {
    static void visit(Visitor& visitor) {} // do nothing
};

// wrap it into a template function
template <template <class> class Start, class Visitor>
void visit_BFS(Visitor& visitor) {
    VisitBasesBFS<Visitor, TT<Start>>::visit(visitor);
}
Okay, so I'm sad to say that after defining all of these aliases and specializations, the above still will not visit our graph properly. This would work fine if it were only applied to trees, but our hierarchy can create that dangerous diamond as shown, which causes errors in the recursion logic we have defined. A call to our current visit_BFS<Derived> will visit our Base type twice. In order to fix that, we need to keep track of the nodes that we have already visited so that we don't call visit<Base> twice.
To solve our little conundrum we need to alter the template definition of our VisitBasesBFS. We are going to add another TypeList to keep track of which types we have already visited. Here I use constexpr, which was introduced in C++11, along with if constexpr, which was made available in C++17. These are more features to use alongside templates to easily express compile-time functionality: any expression marked constexpr can be calculated by the compiler, as long as all the values it uses are available at compile time. I also use std::conditional, which chooses between two types based on a boolean known at compile time.
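In isolation, std::conditional works like this (a quick standalone illustration):

```cpp
#include <type_traits>

// std::conditional<B, T, F>::type is T when B is true, F when B is false.
using WhenTrue  = std::conditional<true,  int, double>::type; // int
using WhenFalse = std::conditional<false, int, double>::type; // double

static_assert(std::is_same<WhenTrue, int>::value, "picked the first type");
static_assert(std::is_same<WhenFalse, double>::value, "picked the second type");
```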
#include <type_traits> // for std::conditional

// some more aliases (a variable template, available since C++14)
template <template <class> class T, class List>
constexpr int Index = TemplateTypeListIndex<T, List>::result;

template <class Visitor, class Queue, class Visited = TT<>>
struct VisitBasesBFS;

template <class Visitor, template <class> class Front, template <class> class ... Rest, class Visited>
struct VisitBasesBFS<Visitor, TT<Front, Rest...>, Visited> {
    static constexpr bool front_visited = Index<Front, Visited> != -1;
    using Bases = typename Front<DB>::Info::BaseList;
    using Next = typename std::conditional<
        front_visited, // bool
        // front_visited == true: just pop the front
        VisitBasesBFS<Visitor, TT<Rest...>, Visited>,
        // front_visited == false: enqueue the bases, mark Front visited
        VisitBasesBFS<Visitor, Cat<TT<Rest...>, Bases>, Cat<TT<Front>, Visited>>
    >::type; // end of Next

    static void visit(Visitor& visitor) {
        // check at compile time whether we have visited Front
        if constexpr (!front_visited) {
            visitor.template visit<Front>();
        }
        Next::visit(visitor);
    }
};

// terminal type
template <class Visitor, class Visited>
struct VisitBasesBFS<Visitor, TT<>, Visited> {
    static void visit(Visitor& visitor) {}
};
That's all you need! Now we can define a visitor, and we have no runtime overhead from our graph traversal logic, because it has all been dealt with by the compiler, which already knows where to go. This is thanks to our conditional Next type: if we haven't visited the type, we visit it and add it to the visited list, and if we have visited it, we just pop the front of the queue.
An implementation of a visitor here would look like so:
#include <iostream> // for std::cout

struct ExampleVisitor {
    template <template <class> class T>
    void visit() {
        std::cout <<
            "the current type is " <<
            T<DB>::Info::name() << std::endl;
    }
};

// run our visitor
ExampleVisitor visitor;
visit_BFS<Derived>(visitor);
If run, the preceding code should print out:
the current type is Derived
the current type is Right
the current type is Left
the current type is Base
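For comparison, here is the same deduplicated BFS written as ordinary runtime logic, with string names standing in for the template types (an illustrative sketch, not egm code), producing the same visit order:

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

// Runtime analog of VisitBasesBFS: FIFO queue plus a visited set.
std::vector<std::string> bfs_bases(const std::string& start,
    const std::map<std::string, std::vector<std::string>>& bases) {
    std::vector<std::string> order;
    std::set<std::string> visited;
    std::queue<std::string> queue; // FIFO: first in, first out
    queue.push(start);
    while (!queue.empty()) {
        std::string front = queue.front();
        queue.pop();
        if (!visited.insert(front).second) {
            continue; // already visited (the diamond's shared Base)
        }
        order.push_back(front);
        auto it = bases.find(front);
        if (it != bases.end()) {
            for (const auto& base : it->second) {
                queue.push(base);
            }
        }
    }
    return order;
}

// The diamond from the article, keyed by type name.
std::vector<std::string> diamond_order() {
    std::map<std::string, std::vector<std::string>> bases{
        {"Derived", {"Right", "Left"}},
        {"Right", {"Base"}},
        {"Left", {"Base"}},
        {"Base", {}}
    };
    return bfs_bases("Derived", bases);
}
```

The difference, of course, is that the template version pays this cost once at compile time, while this version pays it on every call.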
Conclusion
Something fun, though it can eventually get annoying: if you have a lot of types, the compiler can take a LONG time to do its logic and translate it into machine code so many times. On some computers the C++ compiler will even crash because there is not enough memory available to hold all of that complexity. This is the main drawback of compile-time logic, but with a smart or segmented implementation it can be avoided.
With C++11 and C++17, as well as more recent (and future) features, implementing this compile-time logic in succinct and straightforward ways is becoming increasingly easy. As far as I know, no other language can translate so much logic into precompiled machine code the way C++ can, and that is what makes C++ one of the fastest and most powerful languages to this day.
I thought I could get to our smart pointer implementation in egm, which binds the pointer logic to object pool semantics, but I decided it would be better to just focus on the visitor for now. I am also glad to say that all of my work (although now part time because of my new job!) is culminating in a new abstraction update to the uml.cafe api that I will for sure be writing about and building onto my first Tutorial.
As always if you have any questions, email me at [email protected] or just leave a comment below! Thanks for reading and see you next time!