Protecting your users with certificate pinning

These days, apps regularly make the news because they have been hacked or have leaked user data. A security breach can cost not only a lot of money, but also the trust of your users. In this article I will briefly cover secure network communication, but mostly focus on taking security to the next level with certificate pinning.

Building secure mobile apps is difficult because most of the time your app runs in an untrusted environment. Users are often not aware of security risks, so you want to protect them where possible. Public WiFi networks are a great example: if your user is on public WiFi and your app communicates over an unsafe connection (HTTP, for example), it’s child’s play to intercept the network traffic. This traffic may contain passwords or other sensitive information that you don’t want anyone else to see.

To prevent this, HTTP Secure (HTTPS) was created. With HTTPS, communication is encrypted with Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL). When your app communicates over HTTPS, messages can still be intercepted, but they will not be readable to the interceptor. Using HTTPS instead of HTTP should be a no-brainer; some mobile operating systems (like iOS) don’t even allow plain HTTP communication anymore by default.

Although HTTPS vastly improves your security, it’s still not bulletproof. By default your OS contains a set of trusted root certificates (e.g. the list trusted on iOS 11). If for some reason a compromised certificate is installed on your device, or attackers manage to obtain a valid certificate from a Certificate Authority, communication over HTTPS is no longer safe. Malicious parties will then be able to read or even manipulate your network traffic, for example through a man-in-the-middle (MITM) attack, ARP spoofing or DNS spoofing. Certificate pinning helps prevent these attacks by verifying that the server responds with the expected certificate.

Certificate pinning

Certificate pinning can be used to verify the integrity of the system you are communicating with. The certificate can be verified in a few different ways:

  • Certificate pinning: This is the easiest way of pinning. At runtime you compare the server certificate with an embedded certificate; when they don’t match, the request fails. A downside of this method is that when the certificate changes, you also need to update your app.
  • Public key pinning: This way of pinning is a bit trickier because you might need to take some extra steps (depending on the platform) to extract the public key from your server certificate. The extracted public key is compared with a key embedded in your app, and when they don’t match the request fails. Because the public key is static and won’t change when the certificate is renewed (if requested with the same certificate signing request), you don’t need to update your app on certificate renewal. Although this is convenient, some companies have policies on key rotation, so app updates might still be required once in a while.
  • Subject Public Key Info (SPKI) pinning: This verifies that the fingerprint of the certificate matches a hash of the SPKI. The SPKI consists of the public key along with the algorithm it uses. This way of pinning is similar to public key pinning, but uses a different payload. Just like public key pinning, your app doesn’t require an update when a certificate is renewed with the same signing request.

Besides the different ways of pinning, you also have to choose the level of verification. To choose a level, it’s important to understand how certificates are signed. Every operating system has a list of companies that are allowed to issue certificates. These companies are called Certificate Authorities (CAs), and the list itself is often referred to as the Trusted Root Certification Authorities Certificate Store. If you visit a website over HTTPS, your OS or browser verifies that the certificate is signed by a CA in your Trusted Root CA Store. If the CA is in your store, the certificate’s public key is used to set up encrypted communication. If it’s not in the store, the website is marked as “Not Secure”. This also makes clear why you don’t want a malicious certificate installed in your local Trusted Root CA store: it would allow someone to intercept or manipulate your data. The level of verification you choose impacts both the level of security and the number of required app updates. The levels to choose from are:

  • Leaf: This is the most secure verification because it only allows the actual certificate of your server. Other certificates issued by the same (valid) CA will not pass verification. If for some reason your private key gets compromised, your app will be bricked until you’ve updated the embedded certificate.
  • Intermediate: Intermediate verification checks that the certificate is issued by a specific CA, often the company where you bought your certificate. With this verification you are still relying on the CA to only issue certificates to trustworthy companies. This level of pinning is commonly used because it is secure enough in most cases and allows you to renew certificates without updating the app. If your CA is compromised you still need to update your app, but that is not very likely to happen (DigiNotar being a notable exception).
  • Root: This is the least secure verification because when you trust the root, you also trust all of its child CAs. This approach is not often used in app development.

Implementing in Xamarin

There are different ways to implement certificate pinning in Xamarin. You can choose the native approach, which gives you fine-grained control, or a cross-platform approach, which is easier to implement but also has limitations.

Your choice really depends on which HttpClient implementation you are using. If you’re using the managed HttpClient, you can implement certificate pinning with the ServicePointManager class. This is by far the easiest approach, but unfortunately it doesn’t work with the native HttpClient handlers (like NSUrlSession/CFNetwork or AndroidClientHandler). If you want to use the native handlers, you also have to implement certificate pinning in a native fashion.

Option 1: Cross-platform (ServicePointManager)

The ServicePointManager class exposes a ServerCertificateValidationCallback that will be called every time you make a network request:

ServicePointManager.ServerCertificateValidationCallback += (sender, certificate, chain, sslPolicyErrors) =>
{
    // Only allow the request when the server's public key matches one of the embedded keys
    return _allowedPublicKeys.Contains(certificate?.GetPublicKeyString());
};

In this code sample we check whether the public key of the server certificate is in the list we’ve embedded in our app. If not, the call will fail; otherwise it proceeds as it would without pinning. A code sample on intermediate pinning can be found on GitHub.

As previously mentioned, this approach doesn’t work if you’re using the native HttpClient handlers, but there is a workaround with the NuGet package ModernHttpClient. If you install this package and pass the provided handler to your HttpClient, the ServicePointManager callback will be triggered while using the native network stack. Sadly, since Xamarin added support for native handlers, this package isn’t maintained anymore. More info can be found on GitHub.

Option 2: Native (Xamarin.Android) – AndroidClientHandler

On the Android side of things there are a few different ways of implementing certificate pinning. The preferred way is to use Network Security Configuration (NSC). NSC allows you to configure certificate pinning in XML format pretty easily. Unfortunately this requires Android 7.0 (API 24) at a minimum.
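If your minimum API level allows it, pinning via NSC is just a resource file referenced from your AndroidManifest. The sketch below is a minimal example; the domain and the base64 pin values are placeholders you’d replace with your own (each pin is the SHA-256 hash of the SPKI):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2019-01-01">
            <!-- base64-encoded SHA-256 hash of the SPKI (placeholder value) -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <!-- always include a backup pin for your next certificate -->
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

The file is hooked up with android:networkSecurityConfig="@xml/network_security_config" on the application element of your manifest.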

Most of the time you also want to support lower versions of Android, so NSC is not the best option for now. A commonly used alternative on Android is the OkHttp client, which has some methods to easily verify certificates. There is a binding available on NuGet that can be used in Xamarin.Android, but the documentation is pretty limited. Also, it cannot be combined with the HttpClient class, so you cannot use it in shared code. This approach might work for native Android, but it is not great for Xamarin.Android.

An alternative (and in most cases the best) way to implement certificate pinning is by building your own TrustManager class. This class is responsible for validating the external parties you’re communicating with. There are a few ways to use your own TrustManager, but I found the easiest is to install it on the default TLS context:

private void SetHandler()
{
    var algorithm = TrustManagerFactory.DefaultAlgorithm;
    var trustManagerFactory = TrustManagerFactory.GetInstance(algorithm);

    // Replace the default trust managers with our own pinning implementation
    var trustManagers = new ITrustManager[] { new PublicKeyManager() };

    var sslContext = SSLContext.GetInstance("TLS");
    sslContext.Init(null, trustManagers, null);
    SSLContext.Default = sslContext;
    HttpsURLConnection.DefaultSSLSocketFactory = sslContext.SocketFactory;
}

For more info, check the full sample code on GitHub.
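For completeness, here is a minimal sketch of what a PublicKeyManager could look like. This is an illustration, not the code from the GitHub sample: the class name, the _allowedKeys list and the key encoding are assumptions.

```csharp
// Illustrative pinning TrustManager for Xamarin.Android (requires System.Linq).
// "_allowedKeys" would contain the base64-encoded public keys embedded in the app.
public class PublicKeyManager : Java.Lang.Object, Javax.Net.Ssl.IX509TrustManager
{
    private static readonly string[] _allowedKeys = { "MIIBIjANBgkq..." }; // placeholder

    public void CheckServerTrusted(Java.Security.Cert.X509Certificate[] chain, string authType)
    {
        // Compare the leaf certificate's public key with the embedded keys
        var serverKey = Android.Util.Base64.EncodeToString(
            chain[0].PublicKey.GetEncoded(), Android.Util.Base64Flags.NoWrap);

        if (!_allowedKeys.Contains(serverKey))
            throw new Java.Security.Cert.CertificateException("Public key pinning failure");
    }

    public void CheckClientTrusted(Java.Security.Cert.X509Certificate[] chain, string authType)
    {
        // Client certificates are not used in this scenario
    }

    public Java.Security.Cert.X509Certificate[] GetAcceptedIssuers() =>
        new Java.Security.Cert.X509Certificate[0];
}
```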

Option 3: Native (Xamarin.iOS) – NSUrlSession

The most common way to implement certificate pinning in Swift / Objective-C is to use TrustKit. Unfortunately, at this moment, there is no Xamarin binding available for TrustKit.

A different approach is to override NSUrlSessionHandlerDelegate.DidReceiveChallenge. This method allows you to validate certificates and kill the request if the certificate doesn’t match. Unfortunately the NSUrlSessionHandler from Xamarin uses a private NSUrlSessionHandlerDelegate, which makes it hard to override DidReceiveChallenge. It can still be done, but you’ll have to copy the NSUrlSessionHandler code into your project, which is not ideal. Recently CheeseBaron created an issue on GitHub to add support for implementing your own delegate, so this might get fixed in the near future. If you decide to copy the Xamarin classes (until the issue is solved), I recommend looking at Jonathan’s example.

Verification can be done in the DidReceiveChallenge method. Before the end of the method, you need to call the completionHandler to perform or cancel the request:

if (IsValid(challenge))
{
    // Proceed with the request
    completionHandler(NSUrlSessionAuthChallengeDisposition.PerformDefaultHandling, challenge.ProposedCredential);
}
else
{
    // Cancel the request
    completionHandler(NSUrlSessionAuthChallengeDisposition.CancelAuthenticationChallenge, null);
}
The code above makes sure only valid calls may proceed, but the actual validation is done in the IsValid method. This method takes a parameter of type NSUrlAuthenticationChallenge, which can be used to validate the certificate chain. This implementation of IsValid pins the leaf certificate by comparing its DER data with a certificate embedded in the app bundle:

private static bool IsValid(NSUrlAuthenticationChallenge challenge)
{
    var serverCertChain = challenge.ProtectionSpace.ServerSecTrust;

    // Compare the leaf certificate with the certificate embedded in the app bundle
    var first = serverCertChain[0].DerData;
    var firstString = first.GetBase64EncodedString(NSDataBase64EncodingOptions.None);

    var cert = NSData.FromFile("xamarin.cer");
    var certString = cert.GetBase64EncodedString(NSDataBase64EncodingOptions.None);

    return firstString == certString;
}

Most of your network calls are made through the HttpClient, but there are a few exceptions. Some (Xamarin.Forms) Views, like Image or WebView, also make requests to a server. For most Views you can create a custom class and load the content of the View with your HttpClient that uses certificate pinning. Views that contain web content are a bit more complex, because you might also want to verify all links that are loaded inside the (Web)View. To implement this, you’ll need to create custom renderers.

Views approach:

public class SafeImage : Image
{
    private readonly SafeService _safeService;

    public SafeImage(SafeService safeService)
    {
        _safeService = safeService;
    }

    public async Task Load(string url)
    {
        // Fetch the image through the pinned HttpClient; only set the
        // source when the content passed certificate validation
        var stream = await _safeService.GetStream(url);
        if (stream != null)
            Source = ImageSource.FromStream(() => stream);
    }
}

The sample above makes sure that the resource is verified before it is loaded into the Image view. As mentioned, to verify WebViews, you’ll need to create custom renderers:

Android custom renderer

To verify all the calls (including those made inside the WebView), you need to create a custom WebViewClient and add it to your WebView control. The WebViewClient contains a method you can override called ShouldInterceptRequest. Inside this method you load the content with your own HttpClient and then set the loaded content in the Source property of your WebView. A code sample can be found on GitHub.

iOS custom renderer

On iOS a custom implementation of the UIWebViewDelegate can be used to verify all calls from the WebView. This delegate contains a method called ShouldStartLoad that can be overridden for custom validation. If this method returns false, the request is cancelled; when it returns true, the request proceeds. A code sample can be found on GitHub.

Get the certificate (public key)

There are a lot of tools to retrieve a public key from a website or certificate. I’ve found C# to be the easiest way:

var cert = X509Certificate.CreateFromCertFile("{filepath}.cer"); 
var publicKey = cert.GetPublicKeyString();

If you don’t have the .cer file, you can use Google Chrome to download it from your API / website:

  1. Click the padlock button in your address bar.
  2. Then click “Certificate”.
  3. This will now show the applicable certificates for this site. Select your certificate and drag the certificate icon to your file explorer. This will download the certificate.
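If you prefer the command line, openssl can derive an SPKI pin from a certificate as well. In the sketch below a throwaway self-signed certificate is generated first so the commands are self-contained; in practice you’d point the second command at the PEM-encoded certificate downloaded from your server (add -inform der if Chrome gave you a DER-encoded .cer file):

```shell
# For this sketch, generate a throwaway self-signed certificate;
# in practice, use the certificate downloaded from your server.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out server.pem \
  -days 365 -nodes -subj "/CN=example.com" 2>/dev/null

# Extract the public key, DER-encode it, hash with SHA-256 and
# base64-encode it: this is the pin value to embed in your app
openssl x509 -in server.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
```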

How to verify

A very simple verification can be done during development by altering your embedded key so it becomes invalid. After altering your key, requests should be cancelled as expected.

To really put your pinning to the test in a more practical situation, you can use a proxy tool like Fiddler, Charles or Mitmproxy. I’ve found Mitmproxy to be the easiest solution. There is a great blogpost on how to configure Mitmproxy.

For a few dollars, you can also buy the Charles iOS app. This app works as a proxy without you having to install software on your computer.

Regardless of which tool you use, if certificate pinning is implemented correctly, requests will fail when the proxy intercepts them.

In addition to the above, Kerry W. Lothrop created a great series of blog posts on app security and recorded a Xamarin Show episode with James Montemagno. If you want to know more, see his resources and the resources below. Happy pinning!

Related links

Configuring coding conventions in Visual Studio

When developing software in a team, it’s useful to have certain rules on how to write code. Consistency in your code improves readability and maintainability. With EditorConfig you can enforce coding guidelines without having to install additional tools. It even works across IDEs!

Visual Studio allows you to configure coding style rules. Based on this configuration, Visual Studio will suggest code improvements or show an error. Unfortunately there was no easy way to share this configuration with your team, which is why support for EditorConfig was added to Visual Studio 2017.

With EditorConfig you specify code rules in a .editorconfig file. Visual Studio automatically checks whether your code meets the rules and shows violations if it doesn’t. If you have already configured coding rules in your personal preferences, the EditorConfig rules will override them by default. Adding your EditorConfig file to source control allows you to easily share the configuration and gives you versioning for your guidelines. The configuration can be used for a wide variety of languages and IDEs. For the IDEs that are not supported out of the box, you can download a plug-in.

Support for EditorConfig is already available in Visual Studio for Windows, and is available in the latest preview of Visual Studio for Mac (7.5 Preview 5). Microsoft is still extending the support and the .NET community is working on EditorConfig to support more .NET features as well. If you want to contribute, please check out the website.

Getting Started

Visual Studio will look for a .editorconfig file in your solution. When an EditorConfig file is detected, Visual Studio starts enforcing its rules on the files at the same hierarchical level and below. This allows you to override EditorConfig files where needed by adding configuration files at different levels. If you want a single configuration for the entire solution, add “root = true” to a .editorconfig in the solution root; EditorConfig will then stop looking in parent directories. After editing your EditorConfig file, you need to reload your files for the new rules to take effect.
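As an illustration, a minimal .editorconfig could look like this (the rule values are just examples, not recommendations):

```ini
# Stop EditorConfig from searching in parent directories
root = true

# Rules for all files
[*]
indent_style = space
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

# C#-specific code style rules understood by Visual Studio
[*.cs]
csharp_new_line_before_open_brace = all
```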

To simplify writing your EditorConfig files, there is the EditorConfig Language Service extension, which improves IntelliSense, syntax highlighting, visualization and more. The .editorconfig from the Roslyn project might be a good starting point.

Related links

Creating intelligent apps with Computer Vision

After some experimenting with SkiaSharp, I came up with the idea to build a game like Draw It. Normally you draw something and someone needs to guess what you’ve drawn. This time I wanted to take a bit of a different approach. I wanted to draw something with SkiaSharp and put Computer Vision to the test!

Computer Vision is a (small) part of the Cognitive Services that Microsoft offers. The Cognitive Services allow developers without machine learning knowledge to create intelligent applications. There are 5 categories of Cognitive Services (at the moment of writing):

  • Vision: Image and video processing.
  • Knowledge: Mapping complex data to use for recommendations or semantic search.
  • Language: Process natural language to recognise sentiment and determine what users want.
  • Speech: Speaker recognition and real-time translation.
  • Search: More intelligent search engine.

The categories have a lot more to offer than described above. If you want to know more, please visit the Cognitive Services website. There is a 30-day trial available if you want to try it!

Computer Vision 

Computer Vision API is all about analysing images and videos. After you send an image or video, the API returns a broad set of metadata about the visual aspects of the file. Besides telling you what’s in the picture, it can also tell who’s in the picture. Because it’s not trained to recognize everybody on this planet, this only works with celebrities. If you’re not recognized by Computer Vision, it can still tell a lot about your picture: it will add tags and captions, and even guess your age! It’s really fun to experiment with!

Computer Vision also supports Optical Character Recognition (OCR), which can be used to read text from images. OCR is very useful when you’re building apps that need to interpret paper documents. Handwritten text recognition is still in preview.

If you want to test if Computer Vision suits your needs, you can give it a try on the website without having to code a single line. This demo will only work for basic scenarios (upload an image), if you want to use the more advanced features (like real-time video), I’d suggest you try the API directly.

Getting started

Using Cognitive Services is pretty straightforward because all of the magic is available through REST APIs. For this post I will focus on the Computer Vision API, but the other APIs work in a similar way. You can use the APIs with the following steps:

  1. To start using Cognitive Services you can create a trial account on the website. Click the “Try Cognitive Services for free” button and then click “Get API Key” for the Computer Vision API. After signing in with your account you will get an API key that allows you to make 5000 transactions, limited to 20 per minute. Note: the trial is limited to the westcentralus API region.
    If you want to use Cognitive Services in your production environment, you can obtain an API key from the Azure Portal. To do so, create a Cognitive Services resource of the type you want to use (for example: Computer Vision API). When the resource is created, you can get your keys from Quick Start -> Keys.
  2. Because Computer Vision is exposed as a REST API, you can call it from your app using HttpClient. Instead of crafting the requests yourself, you can also use the Microsoft.ProjectOxford.Vision NuGet package to make your calls to the Computer Vision API.
    Note: In most cases you don’t want to make the calls directly from your app, but call the Cognitive Services from your server. This prevents users from stealing your API key (and burning your Azure credits) when they decompile your app or intercept your calls.
  3. After adding the Microsoft.ProjectOxford.Vision package to your backend or frontend, you will be able to use Computer Vision with a few lines of code:
    // If you're using a trial key, make sure to pass the westcentralus API base URL
    var visionClient = new VisionServiceClient(API_Key, "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0");
    VisualFeature[] features = { VisualFeature.Tags, VisualFeature.Categories, VisualFeature.Description, VisualFeature.ImageType };
    var result = await visionClient.AnalyzeImageAsync(stream, features.ToList(), null);

    First we create a VisionServiceClient object with an API key and a base URL. Trial keys only work with the WestCentralUS instance, which is why I’m also passing the base URL. When you’ve created a client, you can call AnalyzeImageAsync. This function takes a Stream (which contains the image) and a list of VisualFeatures. The VisualFeatures specify what you want Computer Vision to analyse. If you prefer to use the HttpClient instead of the NuGet package, I’d recommend the video where René Ruppert explains how to use HttpClient to consume Cognitive Services.


Unfortunately my drawing skills are not good enough for Computer Vision to recognise (with great certainty) what I was drawing. Although it was not sure what I was drawing, it suggested “Outdoor house” with a confidence of 0.1875. Another tag Computer Vision suggested was “Clock”, and I think that’s because the sun looks like a clock.

If you want to give your drawing skills a shot, the code of my experiment is on GitHub. For the drawing part of the application I’ve used some code snippets from the Xamarin.Forms samples. Besides this sample app, Xamarin has a lot of great sample apps on their GitHub.

If you want to learn more about the Computer Vision API or Cognitive Service, make sure to check out the related links below. If you have any questions or suggestions, please don’t hesitate to contact me on Twitter.

Related links:




Drawing with SkiaSharp in Xamarin.Forms

Where most people try to find suitable tools, services and frameworks for the app they are going to build, for me it’s often the other way around. I’m often looking for app ideas to experiment with specific tools, services or frameworks. This time I wanted to give SkiaSharp a try!

In 2005 Google acquired the 2D graphics engine Skia. The engine is open source and is nowadays used in Google Chrome, Chrome OS, Android, Mozilla Firefox, Firefox OS and many other products. To bring this engine to the .NET world, Xamarin created a binding called SkiaSharp. This binding can be used in Windows Forms, WPF, UWP, iOS, Android, tvOS, Mac and Xamarin.Forms. SkiaSharp works similarly across platforms, but each platform has a few specific classes. This blogpost will focus on Xamarin.Forms, since it covers a wide range of platforms and doesn’t contain a 2D graphics engine itself.

Getting started

  1. First, you need to add the SkiaSharp.Views.Forms NuGet package to your Xamarin.Forms PCL (or .NET Standard) project and platform-specific projects. This package depends on the SkiaSharp package, so that NuGet package will be added as well.
  2. After adding the package, you are able to use the SkiaSharp Views in your Xaml or in your code. If you want to use SkiaSharp in your Xaml, you first need to define the namespace:

    xmlns:skia="clr-namespace:SkiaSharp.Views.Forms;assembly=SkiaSharp.Views.Forms"

    To use the Views in your code you only need to add the same namespace as a using.

  3. The Views.Forms namespace contains a View called SKCanvasView. As the name suggests, this View creates a canvas to draw on. It’s not possible to draw directly onto the SKCanvasView, but the view exposes an EventHandler called PaintSurface. This handler is called when the canvas needs to be (completely) redrawn and contains a (very important!) argument of type SKPaintSurfaceEventArgs. Typically the event is raised during initialisation or on rotation. It’s also possible to trigger the event manually by calling InvalidateSurface() on the SKCanvasView. Registering your event handler can be done in code or in Xaml. In Xaml this can be achieved with the following code:
    <skia:SKCanvasView PaintSurface="Handle_PaintSurface"/>

    With this Xaml you tell the SKCanvasView to call Handle_PaintSurface in your code-behind when the canvas needs to be (re)drawn. The argument contains a property Surface, which in turn contains a property called Canvas. On this Canvas object we can start drawing!

  4. The basics of drawing are pretty simple. You need to specify the coordinates of the object you want to draw and pass an instance of type SKPaint. The SKPaint object defines how your object will look. There are 3 styles:
    • Fill: Specifies how to fill an object, for example a circle.
    • Stroke: Specifies how thick and in what color lines are drawn.
    • StrokeAndFill: Specifies both the styling of the lines and how to fill the object.
  5. A few examples of drawing a simple shape:
    SKPaint blueFill = new SKPaint { Style = SKPaintStyle.Fill, Color = SKColors.Blue };
    SKPaint blackLine = new SKPaint { Style = SkiaSharp.SKPaintStyle.Stroke, StrokeWidth = 10, Color = SKColors.Black };
    // Draws a line from 0,0 to 100,100
    canvas.DrawLine(0, 0, 100, 100, blackLine);
    // Draws a rectangle left=0, top=0, right=300, bottom=150
    canvas.DrawRect(new SKRect(0, 0, 300, 150), blueFill);
    // Draws a circle at 100,100 with a radius of 30
    canvas.DrawCircle(100, 100, 30, blackLine);
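Putting the pieces together, a complete PaintSurface handler could look like the sketch below (it assumes the Xaml registration shown above):

```csharp
private void Handle_PaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    var canvas = e.Surface.Canvas;

    // The event redraws the complete canvas, so start with a clean slate
    canvas.Clear(SKColors.White);

    var blackLine = new SKPaint { Style = SKPaintStyle.Stroke, StrokeWidth = 10, Color = SKColors.Black };

    // e.Info contains the pixel size of the canvas; draw a circle in the centre
    canvas.DrawCircle(e.Info.Width / 2f, e.Info.Height / 2f, 100, blackLine);
}
```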

Beyond the basics

SkiaSharp has a lot more to offer, but that’s far too much to cover in one blogpost. If you want to take it a step further, the following topics might be interesting:

  • Bitmaps allow you to use images in your Canvas.
  • With Transformations you can modify the coordinates or size of your objects. This is often used in combination with animations. At this moment you can use the following transformations: Translate, Scale, Rotate and Skew.
  • Save() allows you to save the current state of the canvas. With Restore() you can return to the last saved state. This is very useful when you’ve applied Transforms on your Canvas and you want to go back to the previous state.
  • Animations can be created by calling InvalidateSurface(). This will trigger the PaintSurface event that you can use to apply your changes to the canvas.
  • In a lot of cases you might want to write something on your Canvas and therefor DrawText() was created. Skia allows you to draw text with a lot of styling options.
  • If you want to draw a line from point to point (and so on), SKPath might fulfill your needs. Paths can be customised (visually) pretty easily. You can also create curves to smoothen your path.
  • Creating a gradient can easily be achieved with the SKShader class. You can combine this with some noise to create a nice texture.


Drawing with SkiaSharp is pretty easy and fun, especially when using the Xamarin Live Player. The Live Player allows you to edit your code and instantaneously see your changes. This improves the speed of coding a lot!

SkiaSharp is available for a wide range of platforms, which makes it very powerful. Although you can share a lot of code, you always have to keep in mind which device sizes and form factors you are developing for.

The documentation on the Xamarin website is great and they also have a lot of samples. I’ve created a sample project myself which contains the basics of drawing with SkiaSharp. The result is shown on this image (only tested on iPhone 8 and Nexus 5). You can find the code on GitHub. Feel free to improve the drawing by creating a PR!

If you have any questions, don’t hesitate to contact me through the contact form or on Twitter.

Related links:


Setting up a Xamarin build with Cake

There are a lot of different ways for setting up your continuous delivery pipeline these days. Platforms like Mobile Center and Bitrise offer a complete solution that can be configured very easily. Another solution that’s gaining more popularity is Cake Build. Cake Build also has a lot to offer, but in quite a different way.

Cake is a cross-platform build automation system based on C#. It’s open source (on GitHub) and became part of the .NET Foundation last year. The possibilities with Cake are endless because you can use all the features that C# has to offer. You can even consume NuGet packages in your build script. Cake also has a few siblings called Jake, Make, Rake and Fake, which are similar but use other languages.

When setting up your build with Cake, you create a build file that contains your build steps, written in C#. Cake comes with a lot of default build tasks to use in your pipeline. A great advantage of having your configuration in a file is that you can easily move to another system or environment without having to reconfigure your entire pipeline. Adding this file to your source repository also enables versioning of your build script, awesome! The Cake build tools work on Windows, Linux and Mac OS X.

To make writing your build scripts more pleasant, there is a Visual Studio Code add-in and a Visual Studio add-in. The add-ins integrate Cake into your IDE and allow you to create the required files with a few clicks. Code completion, IntelliSense (recently announced) and syntax highlighting are also great features the add-ins have to offer. If that’s not enough, and your build is still failing for some vague reason, you can even debug your build script to find the issue!

Getting started

When setting up your first Cake build, there are 3 files of importance:

  • Bootstrapper file: downloads Cake.exe and all its dependencies, using a PowerShell script on Windows (build.ps1) or a bash script on Mac OS X / Linux (build.sh). When Cake and all its dependencies are already installed, Cake.exe can also be called directly.
  • Build steps: This file (by default called “build.cake“) contains all the steps that need to be executed for the build to succeed. The build script is written in a C# domain-specific language, and because it’s C#, you can use all the C# features in your script!
    The execution order of the steps can be manipulated by making steps depend on each other or by setting criteria. This is achieved with extension methods.
  • NuGet packages: All the dependencies required for the build to run are defined in a packages config file (tools/packages.config). The listed dependencies are installed into the tools folder. An example dependency is a unit test runner.
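To give an impression of the build steps file, a minimal build.cake for a solution could look like the sketch below. The task names and the solution path are illustrative, not taken from my example project:

```csharp
var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

Task("Clean")
    .Does(() => CleanDirectories("./**/bin"));

Task("Restore")
    .IsDependentOn("Clean")
    .Does(() => NuGetRestore("./MyApp.sln"));

Task("Build")
    .IsDependentOn("Restore")
    .Does(() => MSBuild("./MyApp.sln", settings => settings.SetConfiguration(configuration)));

Task("Default")
    .IsDependentOn("Build");

// Kick off the task passed on the command line (or "Default")
RunTarget(target);
```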

After specifying your build steps and dependencies, you can kick off the build by running the bootstrapper file:

Mac & Linux:

$ sh build.sh

Windows:

PS> .\build.ps1

Don’t hesitate to give Cake Build a try; you can run it side by side with your current build without modifying your project! My working example is also available on GitHub.

To ease the re-use of build scripts, Cake.Recipe was introduced. Cake.Recipe is a set of build tasks which can be consumed through NuGet. For more info, please visit their website.


Code sharing with GIT submodules

When developing software, sharing your code can save a lot of time and reduce redundancy. To really make this work for your team, it is essential to choose a fitting code sharing strategy.

One way to achieve this is by using NuGet to distribute and consume the code in packages. NuGet makes publishing, consuming (a specific version), updating and managing dependencies easy. Although NuGet publishing can be automated, you still need to publish every time you make a small change in your shared code. When your code is still heavily under construction, this can be inefficient.

These inconveniences were taken into consideration by GIT, and therefore GIT submodules were developed. With GIT submodules it's possible to clone a (sub)repository into your working repository. This creates a subfolder in your repository where you can use the code from the external repository. Changes made in the submodule are tracked in the submodule repository, not in the working repository. GIT submodules are especially useful when sharing code across multiple applications. If you only want to share code across platforms, it's probably more efficient to work in the same repository.

With GIT submodules you're able to specify in which directory your submodule lives and which version (a specific commit) of the shared code you want to use. To explain this in more detail, I will use an example where the (Xamarin) “SubmoduleApp” makes use of a “Shared” GIT repository.

  1. First, you need to clone the repository which contains the app (git clone {repoUrl}).
  2. After you cloned the source code of the app you are working on, you can add your submodule. Make sure you have navigated to the correct folder (and switched to the correct branch) before adding the submodule, then run “git submodule add {repoUrl}”.
  3. When this operation finishes, you will notice that the submodule was cloned into the specified subdirectory. Although this process created a lot of files in our working directory, GIT only detected two file changes, which can be verified by running “git status”. All of the cloned files are tracked in the shared repository and therefore aren't labeled as new. The two files (.gitmodules & Shared) that are added to the SubmoduleApp repository contain information about the submodule:
    • .gitmodules: information about the remote repository of the module.
    • Shared: this file points to a specific commit of the submodule; the name of the file depends on the name of the directory of your submodule. By changing the hash, you point to a different version (commit) of the shared repository.
  4. To make sure everyone uses the submodules in the same way, you should make sure to commit and push these files to the root repository.
  5. At this point you've successfully created a submodule! If you followed the exact steps mentioned above, you end up with a folder structure in which it's fairly easy to create a reference from the TodoPCL.sln to your Shared project and keep your code completely separated.
  6. After making some changes, just navigate to the correct repository folder (Shared or Todo) and commit your changes from there like you did before. Changes will be applied to the correct repository.
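The steps above can be sketched with throwaway local repositories. Everything here (the scratch directory, the demo identity, the local path standing in for {repoUrl}) is made up so the example runs anywhere:

```shell
# Work in a scratch directory with a throwaway identity,
# so the example runs anywhere without touching your config.
set -e
export GIT_AUTHOR_NAME=demo    GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d) && cd "$tmp"

# A local stand-in for the remote "Shared" repository.
git init -q Shared
git -C Shared commit -q --allow-empty -m "initial shared commit"

# The app repository; normally this would be "git clone {repoUrl}".
git init -q SubmoduleApp && cd SubmoduleApp
git commit -q --allow-empty -m "initial app commit"

# Step 2: add the submodule into a "Shared" subfolder.
# (protocol.file.allow is only needed because the "remote" is a local path.)
git -c protocol.file.allow=always submodule add "$tmp/Shared" Shared

# Step 3: only .gitmodules and the "Shared" pointer show up as changes.
git status --short

# Step 4: commit (and normally push) them to the root repository.
git commit -q -m "Add Shared submodule"
```

Note that "git submodule add" already stages the two new files, so the final commit records both .gitmodules and the pointer to the shared commit.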

You can find this example on GitHub as well. If you don’t like to work in a terminal, I’d recommend SourceTree. SourceTree is a great tool for working with GIT submodules and is available for Windows and Mac.


Symbolicating iOS crashes

Sometimes when your app crashes it can be pretty hard to determine what went wrong, especially when you are unable to reproduce and/or debug the crash. Luckily the device also stores crash logs!

Getting the device logs
There are two ways to get your crash log: with Xcode or with iTunes. Because not every tester has Xcode installed, I will first list the steps required for iTunes:


  1. Connect device to pc/mac.
  2. Open your device in iTunes and make sure iTunes is synced. This will also transfer your crash logs.
  3. Open the crash log folder:
    1. Windows: %APPDATA%\Apple Computer\Logs\CrashReporter\MobileDevice\{your devicename}
    2. Mac: ~/Library/Logs/CrashReporter/MobileDevice/{your devicename}
  4. In this folder you can find the crash logs with the following format “{appname}.crash”.


With Xcode:

  1. Connect your device to your Mac.
  2. Open Xcode and launch the organizer (Window->Devices or Window->Organizer).
  3. Select your device from the list and click “View Device Logs”.
  4. Find the crash log based on creation time and right click the entry to export. Click export and save the log to your filesystem.

After pulling the logs from the device, you'll probably notice that these log files by themselves don't contain a lot of useful information. Bummer!

The logs you pulled from the device are unsymbolicated, which means a lot of technical information isn't included. You can recognize an unsymbolicated crash log by the missing function names. Fortunately, you can add this information by symbolicating your log.

Xcode offers tooling to symbolicate your crashes. The location of the symbolicatecrash tool depends on your version of Xcode.

  • Xcode 7 & 8: /Applications/
  • Xcode 6: /Applications/
  • Xcode 5: /Applications/

Before you can execute the symbolicatecrash command, you need to set your DEVELOPER_DIR environment variable:

export DEVELOPER_DIR="/Applications/"

To symbolicate your file, you need to run the “symbolicatecrash” command and pass the crash log together with the matching .app/.dSYM files as parameters. The -o parameter indicates where the symbolicated file is written to.

symbolicatecrash -o "symbolicatedCrash.txt" "MyAppName 2-12-14, 9-44 PM.crash" ""
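Putting it together, here is a hedged sketch of the invocation. It assumes the Xcode 7/8 location of symbolicatecrash and uses placeholder file names; the guard keeps the script harmless on machines without Xcode, where you would adjust the path for your version.

```shell
# Assumed Xcode 7/8 location of the symbolicatecrash tool;
# older Xcode versions ship it in a different framework.
SYMBOLICATE="/Applications/Xcode.app/Contents/SharedFrameworks/DVTFoundation.framework/Versions/A/Resources/symbolicatecrash"

# symbolicatecrash needs DEVELOPER_DIR to locate the Xcode toolchain.
export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer"

# Placeholder file names: the crash log plus the matching .dSYM.
if [ -x "$SYMBOLICATE" ]; then
    "$SYMBOLICATE" -o "symbolicatedCrash.txt" \
        "MyAppName 2-12-14, 9-44 PM.crash" "MyAppName.app.dSYM"
else
    echo "symbolicatecrash not found; adjust SYMBOLICATE for your Xcode version."
fi
```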

Because symbolicating your log requires the exact .app and .dSYM file from the build in which the crash occurred, it's common to use a crash reporting service. Most crash reporting services allow you to upload your app and .dSYM manually or from your continuous delivery pipeline. These services also offer an SDK to catch and log unhandled exceptions (crashes) to their system. Normally you can enable this with one line of code in your app. For example, initializing crash reporting for a Xamarin app with Mobile Center can be done with the following code:

MobileCenter.Start("{Your App Secret}", typeof(Crashes));

In addition to this article, you might also want to check out John Miller’s blogpost on Symbolicating iOS crashes. Great post!
