Protecting your users with certificate pinning

These days a lot of apps make the news because they have been hacked or have leaked data. A security breach can cost not only a lot of money but also the trust of your users. In this article I will briefly cover secure network communication, but mostly focus on taking security to the next level with certificate pinning.

Building secure mobile apps is difficult because most of the time your app runs in an untrusted environment. Users are often not aware of security risks, and therefore you want to protect them where possible. Public WiFi networks are a great example: if your user is on public WiFi and your app communicates over an unsafe connection (HTTP, for example), it’s child’s play to intercept the network traffic. This traffic may contain passwords or other sensitive information that you don’t want anyone else to see.

To prevent this, HTTP Secure (HTTPS) was created. With HTTPS, communication is encrypted with Transport Layer Security (TLS), the successor of Secure Sockets Layer (SSL). When your app communicates over HTTPS, messages can still be intercepted, but they will not be readable to the interceptor. Using HTTPS instead of HTTP should be a no-brainer; some mobile operating systems (like iOS) don’t even allow communication over plain HTTP by default anymore.

Although HTTPS vastly improves your security, it’s still not bulletproof. By default your OS contains a set of trusted root certificates (for example, the list of roots trusted on iOS 11). If for some reason a compromised certificate is installed on your device, or attackers manage to obtain a valid certificate from a Certificate Authority, communication over HTTPS is not so safe anymore. Malicious actors will then be able to read or even manipulate your network traffic, for example through a man-in-the-middle (MITM) attack, ARP spoofing or DNS spoofing. Certificate pinning can help you prevent these attacks by verifying that the server responds with the expected certificate.

Certificate pinning

Certificate pinning can be used to verify the integrity of the system you are communicating with. The certificate can be verified in a few different ways:

  • Certificate pinning: This is the easiest way of pinning. At runtime you compare the server certificate with an embedded certificate; when they don’t match, the request fails. A downside of this method is that when the certificate changes, you also need to update your app.
  • Public key pinning: This way of pinning is a bit trickier because you might need some extra steps (depending on the platform) to extract the public key from your server certificate. The extracted public key is compared with a key embedded in your app, and when they don’t match the request fails. Because the public key is static and won’t change when the certificate is renewed (as long as it is renewed with the same certificate signing request), you don’t need to update your app on certificate renewal. Although this is convenient, some companies have policies on key rotation, and therefore app updates might still be required once in a while.
  • Subject Public Key Info (SPKI) pinning: This verifies that a hash of the certificate’s SPKI matches an embedded hash. The SPKI consists of the public key along with its algorithm. This way of pinning is similar to public key pinning, but uses a different payload. Just like public key pinning, your app doesn’t require an update when a certificate is renewed with the same key.

Besides the different ways of pinning, you also have to choose the level of verification. To choose a level, it’s important to understand how certificates are signed. Every operating system has a list of organizations that are allowed to issue certificates. The organizations in this list are called Certificate Authorities (CAs) and the list itself is often referred to as the Trusted Root Certification Authorities Certificate Store. When you visit a website over HTTPS, your OS or browser verifies whether the certificate is signed by a CA that is listed in your Trusted Root CA Store. If it is, the public key of the certificate is used to set up the encrypted connection. If it’s not, the website will be marked as “Not Secure”. This also makes clear why you don’t want a malicious certificate installed in your local Trusted Root CA store: it would allow someone to intercept or manipulate your data. The level of verification you choose impacts both the level of security and the number of required app updates. The levels to choose from are:

  • Leaf: This is the most secure verification because it only allows the actual certificate of your server. Other certificates issued by the same (valid) CA will not pass verification. If for some reason your private key gets compromised, your app will be bricked until you’ve updated the embedded certificate.
  • Intermediate: Intermediate verification checks that the certificate is issued by a specific CA, often the company where you bought your certificate. With this verification you are still relying on the CA to only issue certificates to trustworthy parties. This level of pinning is commonly used because it is secure enough in most cases and allows you to renew certificates without updating the app. If your CA is compromised you still need to update your app, but that is not very likely to happen (DigiNotar being the famous exception).
  • Root: This is the least secure verification because when you trust the root, you also trust all of its child CAs. This approach is not used very often in app development.

Implementing in Xamarin

There are different ways to implement certificate pinning in Xamarin. You can go for the native approach, which gives you fine-grained control, or you can choose a cross-platform way, which is easier to implement but also has limitations.

Your choice really depends on which HttpClient implementation you are using. If you’re using the managed HttpClient, you can implement certificate pinning with the ServicePointManager class. This is by far the easiest approach, but unfortunately it doesn’t work with the native HttpClient handlers (like NSUrlSession/CFNetwork or AndroidClientHandler). If you want to use the native handlers, you also have to implement certificate pinning in a native fashion.

Option 1: Cross-platform (ServicePointManager)

The ServicePointManager class exposes a ServerCertificateValidationCallback that will be called every time you make a network request:

ServicePointManager.ServerCertificateValidationCallback += (sender, certificate, chain, sslPolicyErrors) => 
{ 
    return _allowedPublicKeys.Contains(certificate?.GetPublicKeyString()); 
};

In this code sample we check whether the public key of the server certificate is in the list we’ve embedded in our app. If not, the call will fail; otherwise it proceeds as it would without pinning. A code sample on intermediate pinning can be found on GitHub.
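
For illustration (the GitHub sample is the reference), a rough sketch of such an intermediate check could look like the snippet below. The pinnedIntermediateKey value is a placeholder, not a real key:

using System.Net;
using System.Net.Security;

// Placeholder for the hex-encoded public key of your intermediate CA (not a real key).
var pinnedIntermediateKey = "3082010A0282010100...";

ServicePointManager.ServerCertificateValidationCallback += (sender, certificate, chain, sslPolicyErrors) =>
{
    // Reject anything that already fails the default validation.
    if (sslPolicyErrors != SslPolicyErrors.None || chain == null)
        return false;

    // Walk the chain (leaf -> intermediate -> root) and look for the pinned CA key.
    foreach (var element in chain.ChainElements)
    {
        if (element.Certificate.GetPublicKeyString() == pinnedIntermediateKey)
            return true;
    }

    return false;
};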

As previously mentioned, this approach doesn’t work if you’re using the native HttpClient handlers, but there is a workaround with the ModernHttpClient NuGet package. If you install this package and pass the provided handler to your HttpClient, the ServicePointManager callback will still be triggered while the native network stack is used. Sadly, since Xamarin added support for native handlers, this NuGet package isn’t maintained anymore. More info can be found on GitHub.
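
For completeness, using ModernHttpClient comes down to passing its NativeMessageHandler to your HttpClient:

using System.Net.Http;
using ModernHttpClient;

// ModernHttpClient's NativeMessageHandler routes requests through the native
// network stack while the ServicePointManager callback above is still invoked.
var client = new HttpClient(new NativeMessageHandler());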

Option 2: Native (Xamarin.Android) – AndroidClientHandler

On the Android side of things there are a few different ways of implementing certificate pinning. The preferred way is to use Network Security Configuration (NSC). NSC allows you to configure certificate pinning in XML format pretty easily. Unfortunately this requires Android 7.0 (API 24) at a minimum.

Most of the time you also want to support lower versions of Android, and therefore NSC is not the best option for now. A commonly used alternative on Android is the OkHttp client, which has built-in support for certificate pinning. There is a binding available on NuGet that can be used in Xamarin.Android, but the documentation is pretty limited. Also, it cannot be combined with the HttpClient class, so you cannot use it in shared code. This approach might work for native Android, but it is not great for Xamarin.Android.

An alternative (and in most cases the best) way to implement certificate pinning is to build your own TrustManager class. This class is responsible for validating the external parties you’re communicating with. There are a few ways to use your own TrustManager, but I found the easiest to be setting it on the default TLS context:


private void SetHandler() 
{
   // Use the platform's default trust manager algorithm.
   var algorithm = TrustManagerFactory.DefaultAlgorithm;

   var trustManagerFactory = TrustManagerFactory.GetInstance(algorithm);
   trustManagerFactory.Init((KeyStore)null);

   // Replace the default trust managers with our own pinning implementation.
   var trustManagers = new ITrustManager[] { new PublicKeyManager() };

   var sslContext = SSLContext.GetInstance("TLS");
   sslContext.Init(null, trustManagers, null);

   // Make the pinned context the default for all SSL connections.
   SSLContext.Default = sslContext;
   HttpsURLConnection.DefaultSSLSocketFactory = sslContext.SocketFactory;
}
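
The PublicKeyManager used above is your own implementation of IX509TrustManager. As a rough sketch (the pinned key below is a placeholder, and the exact binding member names may differ between Xamarin.Android versions), it could look something like this:

public class PublicKeyManager : Java.Lang.Object, Javax.Net.Ssl.IX509TrustManager
{
    // Base64 of the server's DER-encoded public key (placeholder value).
    private const string PinnedPublicKey = "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...";

    public void CheckServerTrusted(Java.Security.Cert.X509Certificate[] chain, string authType)
    {
        // Compare the public key of the leaf certificate with the pinned key.
        var serverKey = System.Convert.ToBase64String(chain[0].PublicKey.GetEncoded());
        if (serverKey != PinnedPublicKey)
            throw new Java.Security.Cert.CertificateException("Public key pinning failure");
    }

    public void CheckClientTrusted(Java.Security.Cert.X509Certificate[] chain, string authType)
    {
        // Client certificates are not used in this sketch.
    }

    public Java.Security.Cert.X509Certificate[] GetAcceptedIssuers() => new Java.Security.Cert.X509Certificate[0];
}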

For more info, check the full sample code on GitHub.

Option 2: Native (Xamarin.iOS) – NSUrlSession

The most common way to implement certificate pinning in Swift / Objective-C is to use TrustKit. Unfortunately, at this moment, there is no Xamarin binding available for TrustKit.

A different approach is to override NSUrlSessionHandlerDelegate.DidReceiveChallenge. This method allows you to validate the certificate and kill the request if it doesn’t match. Unfortunately the NSUrlSessionHandler from Xamarin uses a private NSUrlSessionHandlerDelegate, which makes it hard to override DidReceiveChallenge. It can still be done, but you’ll have to copy the NSUrlSessionHandler code into your project, which is not ideal. Recently CheeseBaron created an issue on GitHub to add support for implementing your own delegate, so this might get fixed in the near future. If you decide to copy the Xamarin classes (until the issue is solved), I recommend looking at Jonathan’s example.

Verification can be done in the DidReceiveChallenge method. Before the end of the method, you need to call the completionHandler to perform or cancel the request:


if (IsValid(challenge))
{
   // Proceed with the request
   completionHandler(NSUrlSessionAuthChallengeDisposition.PerformDefaultHandling, challenge.ProposedCredential);
}
else
{
   // Cancel the request
   completionHandler(NSUrlSessionAuthChallengeDisposition.CancelAuthenticationChallenge, null);
}

The code above makes sure only valid calls may proceed, but the actual validation is done in the IsValid method. This method takes a parameter of type NSUrlAuthenticationChallenge, which can be used to validate the certificate chain. This implementation of IsValid compares the server’s leaf certificate with a certificate embedded in the app:


private static bool IsValid(NSUrlAuthenticationChallenge challenge)
{
   // Take the leaf certificate from the server's trust chain.
   var serverCertChain = challenge.ProtectionSpace.ServerSecTrust;
   var first = serverCertChain[0].DerData;
   var firstString = first.GetBase64EncodedString(NSDataBase64EncodingOptions.None);

   // Compare it with the certificate that is bundled with the app.
   var cert = NSData.FromFile("xamarin.cer");
   var certString = cert.GetBase64EncodedString(NSDataBase64EncodingOptions.None);
   return firstString == certString;
}

Views

Most of your network calls are made through the HttpClient, but there are a few exceptions. Some (Xamarin.Forms) Views, like Image or WebView, also make requests to a server. For most Views you can create a custom class and load the content of the View with your HttpClient that uses certificate pinning. Views that contain web content are a bit more complex, because you might also want to verify all links that are loaded inside the (Web)View. To implement this, you’ll need to create custom renderers.

Views approach:


public class SafeImage : Image 
{
   private readonly SafeService _safeService;
   public SafeImage(SafeService safeService)
   {
      _safeService = safeService;
   }
 
   public async Task Load(string url) 
   { 
      var stream = await _safeService.GetStream(url);
      if (stream != null)
      { 
         Source = ImageSource.FromStream(() => stream);
      } 
   } 
}

The sample above makes sure that the resource is verified before it is loaded into the Image. As mentioned, to verify WebViews, you’ll need to create custom renderers:

Android custom renderer

To verify all the calls (including those made from inside the WebView), you need to create a custom WebViewClient and set it on your WebView control. The WebViewClient contains a method for you to override called ShouldInterceptRequest. Inside this method you can load the content with your own HttpClient and return it as the response for the WebView, as sketched below. A code sample can be found on GitHub.
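
As a rough sketch (PinnedHttpClient is a hypothetical helper that returns an HttpClient configured with the pinning from the previous sections), such a WebViewClient could look like this:

using System.Net.Http;
using Android.Webkit;

public class SafeWebViewClient : WebViewClient
{
    // Hypothetical helper that returns an HttpClient with certificate pinning applied.
    private static readonly HttpClient _client = PinnedHttpClient.Create();

    // ShouldInterceptRequest runs on a background thread, so blocking here is acceptable.
    // On older API levels there is an overload that takes the URL as a string instead.
    public override WebResourceResponse ShouldInterceptRequest(WebView view, IWebResourceRequest request)
    {
        var response = _client.GetAsync(request.Url.ToString()).Result;
        if (!response.IsSuccessStatusCode)
        {
            // Block the resource when pinning (or the request itself) fails.
            return new WebResourceResponse("text/plain", "UTF-8", null);
        }

        var mimeType = response.Content.Headers.ContentType?.MediaType ?? "text/html";
        var stream = response.Content.ReadAsStreamAsync().Result;
        return new WebResourceResponse(mimeType, "UTF-8", stream);
    }
}

In your custom renderer you would then attach it with Control.SetWebViewClient(new SafeWebViewClient()).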

iOS custom renderer

On iOS a custom implementation of UIWebViewDelegate can be used to verify all calls from the WebView. This delegate contains a method called ShouldStartLoad that can be overridden for custom validation. If this method returns false, the request is cancelled; when it returns true, the request proceeds, as sketched below. A code sample can be found on GitHub.
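
A minimal sketch of such a delegate (PinningValidator stands for your own verification logic and is a hypothetical helper):

using Foundation;
using UIKit;

public class SafeWebViewDelegate : UIWebViewDelegate
{
    public override bool ShouldStartLoad(UIWebView webView, NSUrlRequest request, UIWebViewNavigationType navigationType)
    {
        // Returning false cancels the navigation; PinningValidator is a hypothetical helper.
        return PinningValidator.IsValid(request.Url);
    }
}

In the custom renderer you then assign an instance of this class to the Delegate property of the UIWebView.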

Get the certificate (public key)

There are a lot of tools to retrieve a public key from a website or certificate. I’ve found C# to be the easiest way:


var cert = X509Certificate.CreateFromCertFile("{filepath}.cer"); 
var publicKey = cert.GetPublicKeyString();

If you don’t have the .cer file, you can use Google Chrome to download it from your API / website:

  1. Click the padlock button in your address bar.
  2. Then click “Certificate”.
  3. This will now show the applicable certificates for this site. Select your certificate and drag the certificate icon to your file explorer. This will download the certificate.

How to verify

A very simple check during development is to alter your embedded key so it becomes invalid. After altering the key, requests should fail as expected.

To really put your pinning to the test in a more realistic situation, you can use a proxy tool like Fiddler, Charles or mitmproxy. I’ve found mitmproxy to be the easiest solution. There is a great blogpost on how to configure mitmproxy.

For a few dollars, you can also buy the Charles iOS app. This app acts as a proxy without you having to install software on your computer.

Whichever tool you are using, if certificate pinning is implemented correctly, requests will fail while the proxy is intercepting them.

In addition to the above, Kerry W. Lothrop created a great series of blogposts on app security and recorded a Xamarin Show episode with James Montemagno. If you want to know more, see his resources and the resources below. Happy pinning!

Related links

Creating intelligent apps with Computer Vision

After some experimenting with SkiaSharp, I came up with the idea to build a game like Draw It. Normally you draw something and someone needs to guess what you’ve drawn. This time I wanted to take a bit of a different approach. I wanted to draw something with SkiaSharp and put Computer Vision to the test!

Computer Vision is (a small) part of the Cognitive Services that Microsoft offers. The Cognitive Services allow developers without knowledge of machine learning to create intelligent applications. There are 5 different categories of cognitive services (at the moment of writing):

  • Vision: Image and video processing.
  • Knowledge: Mapping complex data to use for recommendations or semantic search.
  • Language: Process natural language to recognise sentiment and determine what users want.
  • Speech: Speaker recognition and real-time translation.
  • Search: More intelligent search engine.

The categories have a lot more to offer than described above. If you want to know more, please visit the Cognitive Services website. There is a 30-day trial available if you want to try it!

Computer Vision 

The Computer Vision API is all about analysing images and videos. After you send an image or video, the API returns a broad set of metadata on the visual aspects of the file. Besides telling you what’s in the picture, it can also tell who’s in the picture. Because it’s not trained to recognize everybody on this planet, this only works for celebrities. If you’re not recognized by Computer Vision, it can still tell a lot about your picture: it will add tags and captions, and even guess your age! It’s really fun to experiment with!

Computer Vision also supports Optical Character Recognition (OCR), which can be used to read text from images. OCR is very useful when you’re building apps that need to interpret paper documents. Recognition of handwritten text is still in preview.

If you want to test whether Computer Vision suits your needs, you can give it a try on the website without having to write a single line of code. This demo only works for basic scenarios (uploading an image); if you want to use the more advanced features (like real-time video), I’d suggest you try the API directly.

Getting started

Using Cognitive Services is pretty straightforward because all of the magic is available through REST APIs. For this post I will focus on the Computer Vision API, but the other APIs work in a similar way. You can start using the APIs with the following steps:

  1. To start using Cognitive Services you can create a trial account on the website. Click the “Try Cognitive Services for free” button and then click “Get API Key” for the Computer Vision API. After signing in with your account you will get an API key that allows you to create 5000 transactions, limited to 20 per minute. Note: the trial is limited to the westcentralus API (https://westcentralus.api.cognitive.microsoft.com/vision/v1.0).
    If you want to implement Cognitive Services in your production environment, you can obtain an API key from the Azure Portal. To do so, you need to create a Cognitive Services resource of the type you want to use (for example: Computer Vision API). When the resource is created, you can get your keys from Quick Start -> Keys.
  2. Because Computer Vision is exposed as a REST API, you can call it from your app using HttpClient. Instead of creating your own HttpClient, you can also use the NuGet package to make your calls to the Computer Vision API.
    Note: In most cases you don’t want to make the calls directly from your app, but call the Cognitive Services from your server. This will prevent users from stealing your API key (and burning your Azure credits) when they decompile your app or intercept your calls.
  3. After adding the Microsoft.ProjectOxford.Vision package to your backend or frontend, you will be able to use Computer Vision with a few lines of code:
    // If you're using a trial, make sure to pass the westcentralus api base URL
    var visionClient = new VisionServiceClient(API_Key, "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0");
    VisualFeature[] features = { VisualFeature.Tags, VisualFeature.Categories, VisualFeature.Description, VisualFeature.ImageType };
    var result = await visionClient.AnalyzeImageAsync(stream, features.ToList(), null); 

    First we create a VisionServiceClient object with an API key and a base URL. Trial keys only work with the West Central US instance, and therefore I’m also passing the base URL. When you’ve created a client, you can call AnalyzeImageAsync. This method takes a Stream (which contains the image) and a list of VisualFeatures; the VisualFeatures specify what you want Computer Vision to analyse. A short sketch of reading the result follows below. If you prefer to use HttpClient instead of the NuGet package, I’d recommend the video where René Ruppert explains how to use HttpClient to consume Cognitive Services.
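
For illustration, reading the returned result could look roughly like this (property names as exposed by the Microsoft.ProjectOxford.Vision package; FirstOrDefault requires System.Linq):

// Print the most likely caption and all returned tags with their confidence.
var caption = result.Description?.Captions?.FirstOrDefault();
Console.WriteLine(caption != null
    ? $"Caption: {caption.Text} (confidence {caption.Confidence:P0})"
    : "No caption found");

if (result.Tags != null)
{
    foreach (var tag in result.Tags)
        Console.WriteLine($"Tag: {tag.Name} ({tag.Confidence:P0})");
}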

Results

Unfortunately my drawing skills are not good enough for Computer Vision to recognise (with great certainty) what I was drawing. Although it was not sure, it suggested “Outdoor house” with a confidence of 0.1875. Another tag Computer Vision suggested was “Clock”, and I think that’s because the sun looks like a clock.

If you want to give your drawing skills a shot, the code of my experiment is on GitHub. For the drawing part of the application I’ve used some code snippets from the Xamarin.Forms samples. Besides this sample app, Xamarin has a lot of great sample apps on their GitHub.

If you want to learn more about the Computer Vision API or Cognitive Service, make sure to check out the related links below. If you have any questions or suggestions, please don’t hesitate to contact me on Twitter.

Related links:

Symbolicating iOS crashes

Sometimes when your app crashes it can be pretty hard to determine what went wrong, especially when you are unable to reproduce and/or debug the crash. Luckily the device also stores crash logs!

Getting the device logs
There are two ways to get your crash log: with Xcode or with iTunes. Because not every tester has Xcode installed, I will also list the steps required for iTunes:

iTunes

  1. Connect your device to your PC/Mac.
  2. Open your device in iTunes and make sure iTunes is synced. This will also transfer your crash logs.
  3. Open the crash log folder:
    1. Windows: %APPDATA%\Apple Computer\Logs\CrashReporter\MobileDevice\{your device name}
    2. Mac: ~/Library/Logs/CrashReporter/MobileDevice/{your device name}
  4. In this folder you can find the crash logs, named in the format “{appname}.crash”.

Xcode

  1. Connect your device to your Mac.
  2. Open Xcode and launch the organizer (Window -> Devices or Window -> Organizer).
  3. Select your device from the list and click “View Device Logs”.
  4. Find the crash log based on its creation time, right-click the entry and choose export. Save the log to your filesystem.

After pulling the logs from the device, you’ll probably notice that these log files by themselves don’t contain a lot of useful information. Bummer!

The logs you pulled from the device are unsymbolicated, which means a lot of useful information (like function names) isn’t readable. You can recognize an unsymbolicated crash log by the missing function names. Fortunately you can add this information by symbolicating your log.

Symbolicating
Xcode offers tooling to symbolicate your crashes. The location of this tool depends on your version of Xcode.

  • Xcode 7 & 8: /Applications/Xcode.app/Contents/SharedFrameworks/DVTFoundation.framework/Versions/A/Resources/symbolicatecrash
  • Xcode 6: /Applications/Xcode.app/Contents/SharedFrameworks/DTDeviceKitBase.framework/Versions/A/Resources/symbolicatecrash
  • Xcode 5: /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/PrivateFrameworks/DTDeviceKitBase.framework/Versions/A/Resources/symbolicatecrash

Before you can execute the symbolicatecrash command, you need to set the DEVELOPER_DIR environment variable:

export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer"

To symbolicate your file, you need to run the “symbolicatecrash” command and pass the crash log and the matching .app as parameters. The -o parameter indicates where to write the symbolicated file.

symbolicatecrash -o "symbolicatedCrash.txt" "MyAppName 2-12-14, 9-44 PM.crash" "MyAppName.app"

Services
Because symbolicating your log requires the exact .app and .dSYM files from the build in which the crash occurred, it’s common to use a crash reporting service. Most crash reporting services allow you to upload your app and .dSYM manually or from your continuous delivery pipeline. These services also offer an SDK to catch and log unhandled exceptions (crashes) to their system. Normally you can enable this with one line of code in your app. For example, initializing crash reporting for a Xamarin app with Mobile Center can be done with the following code:


MobileCenter.Start("{Your App Secret}", typeof(Crashes));

In addition to this article, you might also want to check out John Miller’s blogpost on Symbolicating iOS crashes. Great post!

Related links:

Talking Bluetooth (Low Energy) with Xamarin

With the rising popularity of IoT (Internet of Things), it becomes more common that you need to communicate with hardware. In most cases you can accomplish this with network connectivity, but you might want to consider Bluetooth Low Energy (BLE) as well. As the name suggests, BLE uses a lot less energy than classic Bluetooth. Less energy consumption means it’s possible to use smaller (and portable) batteries, which can be very useful for IoT devices. When deciding whether BLE suits your needs, you should take a few things into consideration:

  • Bandwidth: Bluetooth is less suitable for transmitting large sets of data, especially BLE.
  • Costs: in comparison to network adapters, Bluetooth is more affordable.
  • Range: the range really depends on the Bluetooth version and hardware used by the device and the smartphone/tablet. The environment might also impact the range. At a maximum you can reach 100 meters, but the average will be around 15 meters.
  • Power consumption: you can use BLE to save energy, but this limits the throughput. If you are transferring small packets this might be interesting. Classic Bluetooth communication also consumes less power than network connectivity.

Instead of writing the Bluetooth code for every platform, you can use a plugin that provides an abstraction layer, making it possible to access BLE from shared code. At the time of writing, I find the Bluetooth LE Plugin for Xamarin the best pick for the job. The plugin is easy to implement and continuously gets updates. If you want to use classic Bluetooth you might want to look into some other plugins. In this post I will focus on the Bluetooth LE Plugin for Xamarin.

When working with BLE, there are three important abstraction layers:

  • Services: A service contains one or more characteristics. For example, you could have a service called “Heart Rate service” that includes characteristics such as “heart rate measurement”. Each service has its own pre-defined 16-bit or 128-bit UUID.
  • Characteristics: A characteristic is some kind of endpoint for a specific part of the service. Just like a service, a characteristic also has a UUID. Characteristics support a range of different interactions: read, write, notify, indicate, signedWrite, writableAuxilliaries, broadcast.
  • Descriptors: Descriptors are defined attributes that describe a characteristic value. For example, a descriptor might specify a human-readable description, an acceptable range for a characteristic’s value, or a unit of measure that is specific to a characteristic’s value.

Now, let’s dive into some code samples! To give you a taste of what the plugin has in store for you, here are some snippets:

Scan for BLE devices (advertisements)
The Adapter class makes it possible to detect devices in the surrounding area. You can simply get an instance of the adapter and start scanning with the following code:

var adapter = CrossBluetoothLE.Current.Adapter;
adapter.DeviceDiscovered += (s, a) => deviceList.Add(a.Device);
await adapter.StartScanningForDevicesAsync();

Connect to device
When you’ve found the device you want to connect to, you can initiate a connection. After connecting successfully you can get the available characteristics and start sending requests or retrieving notifications from the device. When the Bluetooth device doesn’t receive any requests for a specific period (this may differ per device), it will disconnect to save battery power. After getting a device instance (with the previous sample), it’s fairly easy to set up a connection:

try
{
    await _adapter.ConnectToDeviceAsync(device);
}
catch (DeviceConnectionException e)
{
    // ... could not connect to the device
}

Note: a device scan is only necessary when connecting to the device for the first time. It’s also possible to initiate a connection based on the UUID of the device.
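
For example, the plugin exposes ConnectToKnownDeviceAsync for reconnecting with a stored device id (the GUID below is just a placeholder):

// Reconnect to a previously discovered device by its id, without scanning first.
var knownDevice = await _adapter.ConnectToKnownDeviceAsync(Guid.Parse("8b8f7c9a-0b1c-4b5e-9f69-1234567890ab"));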

Get services, characteristics, descriptors
As described above, a service is the first abstraction layer. When an instance of a service is resolved, the characteristics of that specific service can be requested. The same goes for a characteristic and its descriptors: on a characteristic you can request the descriptors.

The different abstraction layers can be requested in a similar way, either all at once or by id:

// Get All services and getting a specific service
var services = await connectedDevice.GetServicesAsync();
var service = await connectedDevice.GetServiceAsync(Guid.Parse("ffe0ecd2-3d16-4f8d-90de-e89e7fc396a5"));

// Get All characteristics and getting a specific characteristic
var characteristics = await service.GetCharacteristicsAsync();
var characteristic = await service.GetCharacteristicAsync(Guid.Parse("37f97614-f7f7-4ae5-9db8-0023fb4215ca"));

// Get All descriptors and getting a specific descriptor
var descriptors = await characteristic.GetDescriptorsAsync();
var descriptor = await characteristic.GetDescriptorAsync(Guid.Parse("6f361a84-eeac-404c-ae48-e65b9cba6af8"));

Send write command
After retrieving an instance of the characteristic you’re able to interact with it. Writing bytes for example:

await characteristic.WriteAsync(bytes);

Send read command
You can also request information from a characteristic:

var bytes = await characteristic.ReadAsync();
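
The plugin can also push value changes to you. Roughly, subscribing to notifications looks like this (using the plugin’s ValueUpdated event and StartUpdatesAsync method):

// Get notified whenever the device pushes a new value for this characteristic.
characteristic.ValueUpdated += (o, args) =>
{
    var receivedBytes = args.Characteristic.Value;
    // ... handle the received bytes
};

await characteristic.StartUpdatesAsync();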

Pretty easy, right? To get you started, you can find an example project on the Bluetooth LE Plugin GitHub page. Although this seems pretty simple, I’ve experienced some difficulties in making it rock solid. Some tips to improve your code:

  • Run Bluetooth code on the main thread.
  • Don’t scan for devices and send commands simultaneously, and don’t send multiple commands simultaneously. A characteristic can only execute one request at a time.
  • Adjust the ScanMode to suit your use case (see the snippet after this list).
  • Don’t store instances of characteristics or services.
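
Setting the scan mode is a one-liner on the adapter (assuming the plugin’s ScanMode enum):

// Trade battery life for faster discovery while actively scanning.
adapter.ScanMode = ScanMode.LowLatency;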

Related links:

Microsoft Build: The future of Xamarin

On the second day of the Microsoft Build conference, during the keynote, Microsoft had some awesome announcements related to mobile development. In a follow-up session James Montemagno and Miguel de Icaza went into more detail and even showed more epic new features. As always, the two put on a great show while presenting!

Together with my colleague Marco, I’ll cover announcements made at the Microsoft Build conference in a series of blogposts. You can also read these posts on his blog.

Xamarin Live Player
When Xamarin Live Player (XLP) was announced during the Build keynote, most of the audience was pleasantly surprised. Until now, developing for iOS on Windows was a serious hassle because it required a Mac to compile the app. This setup works well when you have an expensive Mac and a fast and reliable network, but in a lot of scenarios these builds take up a lot of time.

With XLP you don’t need a Mac build host anymore! You just install the Xamarin Live Player app from the App Store or Play Store, pair your device in Visual Studio and boom! You’re ready to debug on the device. Just select your device (annotated with “Player”) and hit run:

Besides the build improvements XLP has more to offer! You can also modify your Xamarin.Forms or Xamarin.Classic app while debugging so you can easily test changes to your app without stopping and restarting the debug process.
This will really speed up development! To achieve this you first have to open the view you want to debug live. For now, Xamarin.Forms XAML and iOS storyboards are supported; when XLP becomes generally available, all types of views should be supported. After opening your view file, you can run it with Visual Studio by clicking “Run -> Live Run Current View” on Mac or “Tools -> Xamarin Live Player -> Live Run Current View” on Windows.

XLP is still in preview and after some testing we are still experiencing a few difficulties. If you want to give XLP a try, you’re probably best off using the examples provided by James Montemagno.

Fastlane
If you’ve ever developed an app for iOS you probably felt the pain of having to provision your app in the Apple Developer Portal. Although there is a good (and very long) read on how to accomplish this, you don’t want to waste any time on this. This is where Fastlane comes to the rescue. With Fastlane tools installed you are able to provision your app with a few clicks in Visual Studio! Fastlane also comes in handy when releasing your app to the Stores. It handles all tedious tasks, like generating screenshots, dealing with code signing, and releasing your application.

XAML Standard 1.0
During the Build Conference Microsoft also introduced a new standard for XAML, called XAML Standard 1.0. The standard will describe a specification for cross-platform user interfaces for both UWP and Xamarin.Forms. As for now, XAML Standard is in a very early stage and Microsoft encourages developers to contribute to their specification. 1.0 should become available later this year.

Bindings simplified
Xamarin already supported using native libraries through Android and iOS binding projects. Because of the complex nature of binding projects, Xamarin released the CocoaPods Importer, which simplifies importing CocoaPods with a few clicks. With this importer you can easily transform a CocoaPods library into a .NET library.

iOS Extensions
When you’ve created extensions for your app, these extensions run in a separate process. Because of this, the Xamarin runtime used to be included for every extension. With the latest version of Xamarin, the runtime is only included once for all your extensions, resulting in a smaller app size and improved performance. The latest version of Visual Studio also allows multi-process debugging, which comes in handy when debugging your app together with your extensions.

AOT for Mac and Android, Hybrid mode
The compilation process of Xamarin.Android and Xamarin.iOS apps was quite different until now. One big difference was that Android apps were compiled Just In Time (JIT), while iOS apps were compiled Ahead Of Time (AOT). During the compilation of iOS apps, LLVM optimizes the generated code for high performance. Xamarin announced that it will also start supporting AOT for Android and Mac apps, which will result in better performance. When you compile AOT you might lose some flexibility (for example: no dynamic code loading), and therefore Xamarin also introduced a Hybrid mode. Hybrid mode works on platforms that support both AOT and JIT: it compiles ahead of time, but also adds the bits needed for JIT.

Embedinator-4000
While the Xamarin platform mostly focuses on writing code in C# and compiling it to a native application, the Embedinator takes a bit of a different approach. The Embedinator makes it possible to write your code in C# and then compile it into a framework library for Java, Swift, Objective-C or C/C++. This way you can easily share your C# code with truly native apps. Miguel de Icaza promised that more languages will be added. For now the Embedinator can be used from the command line; in the future it will be integrated into Visual Studio.

SkiaSharp
SkiaSharp is a cross-platform 2D graphics API for .NET platforms based on Google’s Skia Graphics Library (which is used by apps like Google Chrome). Xamarin announced that it will be supporting Android, iOS, Mac, UWP, tvOS and Windows. Support for SVG files was also added in the latest release. James showed an awesome demo of the Kimono Designer, which is built on top of SkiaSharp.

Xamarin.IoT
The latest alpha of Xamarin also contains support for Linux-based IoT devices! It adds a new project template for Xamarin.IoT and allows you to deploy and debug your IoT app remotely. Debugging works similarly to debugging other Xamarin applications. The IoT Hub SDK makes it easy to communicate with your Azure IoT Hub; the library supports communication over the MQTT and AMQP protocols.

Mobile Center
Microsoft is putting a lot of effort into making Mobile Center a full-fledged CI/CD solution. With the latest release they added support for UWP and the ability to send push notifications to Android and iOS applications. Mobile Center is still in preview, but is expected to become generally available later this year.

Project Rome for iOS
Last year Microsoft announced the Project Rome SDK for Android. This year at the Build conference Microsoft also announced the iOS SDK. Project Rome creates a bridge between the different platforms for a seamless user experience. The SDK allows developers to start and control apps across platforms and devices.

If you have any comments or want to share your experience, don’t hesitate to contact me on Twitter.

Related links:

Xamarin: UITest backdoor

When running your UI tests, locally or in Xamarin Test Cloud, you sometimes want to execute code in your app from your tests. You might, for instance, want to set up some mock implementations instead of making actual API calls, or you might want to trigger some OS-specific code from your tests. In these scenarios, backdoors come in handy!

Backdoors themselves are not very complex, but they open up a lot of possibilities with UI tests. In the past I have used backdoors to:

  • Set up authentication for tests.
  • Create a specific state in your app to run your tests in.
  • Set mock implementations from my UI tests.
  • Trigger deeplinks.

And I can think of a lot more useful scenarios. You can create a backdoor with the following code:

iOS
In your AppDelegate class you need to create a method that is public, returns NSString, is annotated with the Export attribute and takes an NSString as a parameter. For example:

public class AppDelegate : UIApplicationDelegate
{
    [Export("myBackdoorMethod:")] // notice the colon at the end of the method name
    public NSString MyBackdoorMethod(NSString value)
    {
        // In through the backdoor - do some work, then return a result to the test.
        return new NSString(string.Empty);
    }
}

Android
On Android you need to create a method in your MainActivity that is public, returns string or void, is annotated with the Export attribute and takes a string/int/bool as a parameter:

public class MainActivity : Activity
{
    [Export("MyBackdoorMethod")]
    public void MyBackdoorMethod(string value)
    {
        // In through the backdoor - do some work.
    }
}

Invoking the backdoor from your UI tests:

if (app is Xamarin.UITest.iOS.iOSApp)
{
    app.Invoke("myBackdoorMethod:", "exampleString");
}
else
{
    app.Invoke("MyBackdoorMethod", "exampleString");
}

Note: the signature of the backdoor methods is very important, as the Backdoors – Xamarin docs mention.

If you want to swap out your actual implementations for mocks, using compiler flags (combined with build configurations) might be better suited, as sketched below. You can read more on this over here.
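
As a rough illustration (IApiService, MockApiService and the container registration are hypothetical names, not from the docs), swapping implementations with a compiler flag could look like this:

// Register a mock implementation when the app is built with a UITEST symbol.
#if UITEST
    container.Register<IApiService, MockApiService>();
#else
    container.Register<IApiService, ApiService>();
#endif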

Related links:

Build configurations and compiler flags

If you have developed some Xamarin applications in the past, you might have stumbled upon the extensive range of configuration options you can choose from. The settings you use during development may be very different from your Release settings. For example, you don’t want to include debug information in the app that will be published to the app stores: it will increase the app size enormously and expose technical information about your app that might be useful to malicious people.

Besides the default Release and Debug configurations, you sometimes need an extra set of configurations. Visual Studio allows you to create additional build configurations:

Configuration Manager

Build configurations

Compiler flags / Conditional compilation symbols

Per build configuration you can also specify a set of compiler flags. Compiler flags by themselves don’t have any impact on your app, but you can use these flags in your code. By default the build configuration called Debug has a compiler flag called “DEBUG”. In your code you can check whether this flag is set (and therefore the Debug configuration is active) and execute some debug-only code. A common scenario is to log additional information when the DEBUG flag is set.

Setting compiler flags

Using a compiler flag in code:

#if DEBUG
    InitializeUiTests();
#endif

Related links: