We are all more or less familiar with the notion of meta-programming: writing code that generates new code at compile time. The level of interest in such techniques ebbs and flows, but currently a lot of work is being done around it, including smash hits like Boost.Hana and new approaches to type-based computation such as MPL11, Metal, or Brigand. People usually associate meta-programming with complex invocations of the dual-faced angle-brackets god and a lot of thinking in types. A nice change of perspective is to see how C++14 and 17 provide new tools to make code generation easier and more amenable to day-to-day practice.
The modern C++ community puts a strong emphasis on value semantics. We have learnt to build types and algorithms thinking in terms of values, their properties, and relationships. However, when it comes to the architecture of large software, we end up growing ad-hoc webs of mutable objects that perform poorly and are hard to understand, debug, and maintain.
C++11 added a generalized attribute syntax to annotate your code with additional information - basically comments that your compiler will read. It also standardized a couple of attributes. C++14 and 17 went on to add more standardized attributes. But C++17 also added another, often overlooked feature regarding attributes: compilers are now required to ignore attributes they don't recognize instead of rejecting the program.
Software keeps changing, but not always as fast as its clients. A key to maintaining a library in the long run is to ensure proper versioning of the API and ABI. Not only does this give a clear picture of both source and binary compatibility between versions, but it also helps design by making breaking changes explicit to the developer.
Do you really think you know C++? Then you probably want to attend this talk, where you will discover the most surprising, weird, strange, or downright "WTF" language features you could encounter.
There are so many obscure corners of the language that seem to go against common programming intuition. The freedom that C++ gives the programmer may be a double-edged sword; while you can do many things that have been abstracted away in other languages, it's very easy to shoot yourself in the foot.
From unintended private member access to unexpected function definitions, in this talk, we will walk you through the quirks that still exist in the language today, and the motivations behind them.
C++ programmers care about performance in every minute of their working life. As a matter of fact, speed is almost exclusively the central criterion for choosing C++ in a software project. But how do we measure it? What tools are out there to help us measure the performance of a given code snippet inside a larger project? What tools are out there to judge the performance of new algorithms, or of our collaborators' recent contributions? In this talk I would like to address these questions and provide demonstrations and experience reports from the field. I'll discuss layman tools as well as open-source tools and finish with proprietary tools. With this talk, I hope to motivate a more vivid discussion in the C++ community on how we measure the speed of our implementations.
Software dependencies are not always obvious in the code, yet they can reduce the quality of your code and increase your build times enormously. For this reason it is important to understand when dependencies occur and how you can deal with them.
Did you master all of C++? Learnt all the features, read all the proposals, and crave more?
Come dive into some exciting algorithms - tools rare enough to be novel, but useful enough to be found in practice. Want to learn about "Heavy Hitters" to prevent DOS attacks? Come to this talk. Want to avoid smashing your stack during tree destruction? Come to this talk. Want to hear war stories about how a new algorithm saved the day? Come to this talk! We'll dive into the finest of algorithms and see them in use - Fantastic Algorithms, and Where To Find Them!
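The abstract doesn't say which algorithm backs the "Heavy Hitters" story, but a classic technique for the problem is the Misra-Gries summary; the sketch below (class and method names are my own) finds, in one pass and constant memory, every candidate element that could occur more than n/k times in a stream of length n:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Misra-Gries summary: with k-1 counters, any element occurring more than
// n/k times in the stream is guaranteed to survive in the counter map.
// A second pass over the data can then verify the exact counts.
class HeavyHitters {
    std::size_t k_;
    std::unordered_map<std::string, std::size_t> counters_;
public:
    explicit HeavyHitters(std::size_t k) : k_(k) {}

    void add(const std::string& item) {
        auto it = counters_.find(item);
        if (it != counters_.end()) {
            ++it->second;                    // known candidate: bump its count
        } else if (counters_.size() < k_ - 1) {
            counters_[item] = 1;             // free slot: start tracking it
        } else {
            // No free slot: decrement every counter, evicting zeroed entries.
            for (auto i = counters_.begin(); i != counters_.end(); ) {
                if (--i->second == 0) i = counters_.erase(i);
                else ++i;
            }
        }
    }

    std::vector<std::string> candidates() const {
        std::vector<std::string> out;
        for (const auto& kv : counters_) out.push_back(kv.first);
        return out;
    }
};
```

Fed a request stream where one client dominates, the dominating client is guaranteed to be among the surviving candidates - which is exactly what you want for cheap DOS detection.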
C++17 adds many new features: structured bindings, deduction guides, if-init expressions, fold expressions, if constexpr, and enhanced constexpr support in the standard library. Each of these features is interesting on its own, but what will be their cumulative effect on real code? We'll explore how each feature may (or may not) help real code achieve enhanced readability, compile-time performance, and runtime performance.
A common problem is to map a set of input values to a set of output values such that, when one or more of the input values change, all dependent output or intermediate values are updated accordingly. For instance, consider a radius and angle pair that has to be mapped to Cartesian x,y coordinates, with intermediate sin/cos computations that only need to be updated when the angle changes, not when the radius changes. Handling these kinds of dependencies for a larger set of input/output values is usually quite tedious and error-prone.
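The tedium becomes clear when you hand-roll the radius/angle example; the sketch below (names are my own) tracks the one dependency with a dirty flag so the trigonometry is only recomputed when the angle changes:

```cpp
#include <cassert>
#include <cmath>

// Hand-rolled dependency tracking: sin/cos are cached intermediates that
// depend only on the angle, so changing the radius leaves them untouched.
class PolarPoint {
    double radius_ = 1.0, angle_ = 0.0;
    double sin_ = 0.0, cos_ = 1.0;   // cached intermediates for angle_ = 0
    bool trig_dirty_ = false;

    void update_trig() {
        if (trig_dirty_) {
            sin_ = std::sin(angle_);
            cos_ = std::cos(angle_);
            trig_dirty_ = false;
        }
    }
public:
    void set_radius(double r) { radius_ = r; }                // cheap: no trig
    void set_angle(double a)  { angle_ = a; trig_dirty_ = true; }

    double x() { update_trig(); return radius_ * cos_; }
    double y() { update_trig(); return radius_ * sin_; }
};
```

Even for two inputs and two outputs this needs a flag, an invalidation rule, and careful placement of the recomputation - exactly the bookkeeping that grows error-prone as the dependency graph gets larger.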
Code reviews can be an important instrument not only for improving the quality of our code but also for knowledge transfer. They can be crucial when we develop software with a language as complex as C++ that allows solving problems in multiple different ways. Both junior and senior developers can benefit from code reviews if they are done the right way. Not every line of code needs to be reviewed though, and it is vital to project success to make code reviews efficient by focusing on the right parts of our code.
Futures were added in C++11 as high-level abstractions for asynchronous operations and are planned to be enhanced in the upcoming Concurrency TS. Many today think that std::future is broken in multiple ways.
Fibers, green threads, channels, lightweight processes, coroutines, pthreads - there are lots of options for parallelism abstractions. But what do you do if you just want your application to run a specific task on a specific core on your machine? In IncludeOS we have proper multicore support allowing you to do just that in C++: assign a task - for instance a lambda - directly to an available CPU. It will run uninterrupted by context switches or meddling schedulers optimized to please everyone. In this talk we’ll show you how we use CPU cores to do things like TLS decryption under heavy load and handle individual TCP connections. We’ll also explore how direct per-core processing can be combined with threading concepts like fibers or coroutines, without taking away from the simplicity of getting work done uninterrupted.
Concurrency is notoriously hard to get right. First of all, it introduces a new class of errors. For example, multiple threads operating on shared data may be involved in a data race, and synchronisation mechanisms, when not applied correctly, can introduce deadlock. Moreover, the behaviour of a concurrent system depends on the runtime interleaving of threads or processes, which is a source of nondeterminism. This nondeterminism makes concurrent systems hard to reason about and potential bugs hard to find and reproduce using traditional tools.