[Developer's community]

C# on steroids or what's new in C# 7.0?

 

21 September 2017
Hello

Before talking about the new stuff in C# and the .NET Framework overall, it would be good to see how the platform has evolved over time (since the first release in 2002). A lot has changed since then: .NET has become a platform-agnostic and very mature platform for enterprise-grade software development.

Figure 1. C# Language evolution

As you can see from the scheme, every version brought something really exciting. Along with new features, the appearance of .NET Core also triggered certain confusion around versioning, platform/framework names, framework capabilities, etc. We'll clarify what all this is about in the second part of this topic. Let's start with the interesting part.

 

Part 1. C# ver.7 and the future prospect

 

Binary Literals

Numeric literals are not a new concept in C#. We have been able to define integer values in base-10 and base-16 since C# was first released. A common use case for base-16 (also known as hexadecimal) literals is to define flags and bit masks in enumerations and constants. Since each digit in a base-16 number is 4 bits wide, the bits in that digit are represented by 1, 2, 4, and 8.

While this is familiar to most, with C# 7 we can now express such things explicitly in base-2, more commonly referred to as binary. While hexadecimal literals are prefixed with 0x, binary literals are prefixed with 0b:

const int MyValue = 0b00_00_11;
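
As a hedged sketch of my own (not from the original post), binary literals pair nicely with the new digit separator (_) when defining flag enumerations, since each bit is visible at a glance:

[Flags]
enum Permissions
{
    None    = 0b0000_0000,
    Read    = 0b0000_0001,
    Write   = 0b0000_0010,
    Execute = 0b0000_0100,
    All     = Read | Write | Execute
}

var p = Permissions.Read | Permissions.Write; // combine individual flags bitwise
Console.WriteLine(p.HasFlag(Permissions.Write)); // True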

 

Local functions

Those who are familiar with JavaScript will be familiar with local functions: the ability to define a method inside another method. We have had a similar ability in C# since anonymous methods were introduced, albeit in a slightly less flexible form. Up until C# 7, methods defined within another method were assigned to variables and were limited in use and content. Local functions allow us to declare a method within the scope of another method; not as an assignment to a run-time variable, as with anonymous methods and lambda expressions, but as a compile-time symbol to be referenced locally by its parent function:

private static void UseLocalFunction()
{
    // Local function: a compile-time symbol, callable only within this method
    uint Collatz(uint value) => value % 2 == 1 ? (3 * value + 1) / 2 : value / 2;

    Console.WriteLine(Collatz(7)); // 11
}
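
A common motivating case, sketched here under my own assumptions (the Range method is purely illustrative), is argument validation in an iterator method: the local function keeps the enumeration lazy while the validation throws immediately:

public static IEnumerable<int> Range(int start, int count)
{
    if (count < 0) throw new ArgumentOutOfRangeException(nameof(count));
    return Iterate(); // validation runs now; enumeration itself stays lazy

    IEnumerable<int> Iterate()
    {
        for (int i = 0; i < count; i++)
            yield return start + i; // the local function captures 'start' and 'count'
    }
}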

 

‘Out’ variables

How often have you written code like this?

int dummy;
if (int.TryParse(someString, out dummy) && dummy > 0)
{
   //TODO
}

Typically, with 'out' parameters you need to declare the variable first and then pass it into the method. Now you can do it all in one line:

if (int.TryParse(value, out var thirdWay))

'Out' variables are part of a wider set of features for reducing repetition (in written code and in run-time execution) and saying more with less (i.e. making it easier for us to infer intent from the code without additional commentary). This is a very simple addition to C# syntax, yet a useful one.
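
As a minimal sketch (assuming a string variable named value), the variable declared inline stays in scope after the condition, so the rest of the enclosing block can keep using it:

if (int.TryParse(value, out var number) && number > 0)
{
    Console.WriteLine($"Parsed a positive value: {number}");
}

// 'number' is still definitely assigned and in scope here
Console.WriteLine(number);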

 

 

Tuples and deconstruction

Tuples are a temporary grouping of values. You could compare a tuple to a POCO class, but instead of defining it as a class, you can define it on the fly. The following is an example of such a class:

class MyProperty
{
    public int Id { get; set; }
    public string Name { get; set; }
}
var myObj = new MyProperty { Id = 1, Name = "test" };

In the above example it wasn't really necessary to name the concept we're working with, as it is probably a temporary structure that doesn't need naming. Tuples are a way of creating such structures on the fly without the need to create classes. The most common reason for grouping values temporarily is returning multiple values from a method. Currently, there are a few ways of doing that in C#: out parameters, the Tuple<T> type, or a class/struct. All these approaches have certain drawbacks (please refer to the previous section on out parameters, for example).

You can specify multiple return types for a function, in much the same syntax as you do for specifying multiple input types (method arguments):

public (double lat, double lng) GetLatLng(string address) { ... }
var ll = GetLatLng("some address"); 
Console.WriteLine($"Lat: {ll.lat}, Long: {ll.lng}");

NOTE: You need to reference the System.ValueTuple package for tuples to work. You can also deconstruct an object into a tuple if its type implements a Deconstruct() method.
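
A minimal sketch (reusing the GetLatLng method above and a hypothetical Point type) of deconstructing both a tuple and a custom type:

// Deconstruct the returned tuple straight into separate locals
var (lat, lng) = GetLatLng("some address");
Console.WriteLine($"Lat: {lat}, Long: {lng}");

// A type opts into deconstruction by providing a Deconstruct method
class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }
    public void Deconstruct(out int x, out int y) { x = X; y = Y; }
}

var (x, y) = new Point(3, 4); // x == 3, y == 4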

 

 

Pattern matching

As we've seen so far, C# 7.0 introduces some nice new things, like "out" variables and binary literals. These are all great little additions, but C# 7.0 also has truly cool and long-awaited features in its arsenal, like pattern matching.

I admit that I was a bit confused as to why this is called "pattern matching." I read the Wikipedia article and suspect I would understand it better if I were a Computer Science major instead of a Computer Systems Engineering one. According to Wikipedia:

“In computer science, pattern matching is the act of checking a given sequence of tokens for the presence of the constituents of some pattern.”

The What's New in C# 7 documentation explains:

“Pattern matching is a feature that allows you to implement method dispatch on properties other than the type of an object”.

Well, this is a bit confusing, isn't it? In my understanding, "pattern matching" means that you can switch on the type of data you have in order to execute one statement or another. Although pattern matching looks a lot like if/else, it has certain advantages:

  • You can do pattern matching on any data type, even your own, whereas with if/else you always need primitives to match
  • Pattern matching can extract values from your expression

C# 7.0 enhances two existing language constructs with patterns:

  • is expressions can now have a pattern on the right-hand side, instead of just a type
  • case clauses in switch statements can now match on patterns, not just constant values

In future versions of C#, more places where patterns can be used are likely to appear.
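
For instance, an is expression can test and bind in one step (a small sketch of my own, not from the official documentation):

object input = "42";
if (input is string s && int.TryParse(s, out int number))
{
    Console.WriteLine(number + 1); // 's' and 'number' are both bound by the patterns above
}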

The following is an example of pattern matching:

class Geometry { }
class Triangle : Geometry { public int Width; public int Height; public int Base; }
class Rectangle : Geometry { public int Width; public int Height; }
class Square : Geometry { public int Width; }
 
Geometry g = new Square { Width = 5 };
switch (g)
{
    case Triangle t:
        WriteLine($"{t.Width} {t.Height} {t.Base}");
        break;
    case Rectangle r:
        WriteLine($"{r.Width} {r.Height}");
        break;
    case Square s:
        WriteLine($"{s.Width}");
        break;
    default:
        WriteLine("<other>");
        break;
}

In the sample above you can see how we match on the data type and immediately bind the matched value to a variable of that type, so its members are ready to use.

 

Ref Locals and Ref Returns

C# 7 brings a variety of changes to how we get output from methods, specifically out variables, tuples, and ref locals and ref returns. We've already covered "out" in this topic, so let's take a look at ref locals and ref returns. Like the changes to async return types, this feature is all about performance.

NOTE: The addition of ref locals and ref returns enables algorithms that are more efficient, avoiding copying values or performing dereferencing operations multiple times.

As with many performance-related issues, it is difficult to come up with a simple real-world example that is not entirely contrived. So, suspend your engineering minds for a moment and assume this is a perfectly good solution for the problem at hand, so that I can explain the feature. Imagine it is Halloween and we are counting how many pieces of candy we have collectively from all of our heavy bags of deliciousness. We have several bags of candies with different candy types and we want to count them. So, for each bag, we group by candy type, retrieve the current count of each candy type, add the count of that type from the bag, and then store the new count:

void CountCandyInBag(IEnumerable<string> bagOfCandy)
{
    var candyByType = from item in bagOfCandy
                      group item by item;
 
    foreach (var candyType in candyByType)
    {
        var count = _candyCounter.GetCount(candyType.Key);
        _candyCounter.SetCount(candyType.Key, count + candyType.Count());
    }
}
 
class CandyCounter
{
    private readonly Dictionary<string, int> _candyCounts = new Dictionary<string, int>();
 
    public int GetCount(string candyName)
    {
        if (_candyCounts.TryGetValue(candyName, out int count))
        {
            return count;
        }
        else
        {
            return 0;
        }
    }
 
    public void SetCount(string candyName, int newCount)
    {
        _candyCounts[candyName] = newCount;
    }
}

This example works just fine, but it has an overhead: we have to look up the candy count value in our dictionary multiple times when retrieving and setting the count. By using ref returns, we can create an alternative to our dictionary that minimizes that overhead. However, we cannot return a reference to a local variable, since a reference must not outlive the value it refers to, so we must also modify how we store our counts:

void CountCandyInBag(IEnumerable<string> bagOfCandy)
{
    var candyByType = from item in bagOfCandy
                      group item by item;
 
    foreach (var candyType in candyByType)
    {
        ref int count = ref _candyCounter.GetCount(candyType.Key);
        count += candyType.Count();
    }
}
 
class CandyCounter
{
    private readonly Dictionary<string, int> _candyCountsLookup = new Dictionary<string, int>();
    private int[] _counts = new int[0];
 
    public ref int GetCount(string candyName)
    {
        if (_candyCountsLookup.TryGetValue(candyName, out int index))
        {
            return ref _counts[index];
        }
        else
        {
            int nextIndex = _counts.Length;
            Array.Resize(ref _counts, nextIndex + 1);
            _candyCountsLookup[candyName] = _counts.Length - 1;
            return ref _counts[nextIndex];
        }
    }
}

Now we are returning a reference to the actual stored value and changing it directly without repeated look-ups on our data type, making our algorithm perform better. Be sure to check out the official documentation for alternative usage examples.

NOTE: I want to take a moment to point out a few things about the syntax. This feature uses the ref keyword a lot. You have to specify that the return type of a method is ref, that the return statement itself returns a ref, that the local variable storing the returned value is a ref, and that the method call is also ref. If you skip one of these uses of ref, the compiler will let you know, although, as I discovered when writing the examples, the message is not particularly clear on how to fix it. Not only that, but you may get caught out when consuming by-reference returns, because it is legal to omit the two uses of ref at the call site, e.g.:

int count = _candyCounter.GetCount(candyName);

In such a case, the method call behaves as if it were a regular, by-value return, so watch out!

 

  

Generalized async return types

Returning a Task object from async methods can introduce performance bottlenecks in certain paths. "Task" is a reference type, so using it means allocating an object. In cases where a method declared with the async modifier returns a cached result or completes synchronously, the extra allocations can become a significant time cost in performance-critical sections of code. It can become very costly if those allocations occur in tight loops.

The new language feature means that async methods may return other types in addition to Task, Task<T> and void. The returned type must still satisfy the async pattern, meaning a GetAwaiter method must be accessible. As one concrete example, the ValueTask type has been added to the .NET Framework to make use of this new language feature:

public async ValueTask<int> Func()
{
    await Task.Delay(100);
    return 5;
}

NOTE: You need to add the NuGet package System.Threading.Tasks.Extensions in order to use the ValueTask<TResult> type.

A simple optimization would be to use ValueTask in places where Task would be used before. However, if you want to perform extra optimizations by hand, you can cache results from async work and reuse the result in subsequent calls. The ValueTask struct has a constructor with a Task parameter so that you can construct a ValueTask from the return value of any existing async method:

public ValueTask<int> CachedFunc()
{
    return (cache) ? new ValueTask<int>(cacheResult) : new ValueTask<int>(LoadCache());
}
private bool cache = false;
private int cacheResult;
private async Task<int> LoadCache()
{
    // simulate async work:
    await Task.Delay(100);
    cacheResult = 100;
    cache = true;
    return cacheResult;
}
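
A hedged usage sketch: only the first call pays for the underlying Task, while subsequent calls wrap the cached value in a ValueTask without an extra allocation:

public async Task RunAsync()
{
    int first = await CachedFunc();  // first call awaits LoadCache() and allocates the underlying Task
    int second = await CachedFunc(); // subsequent calls wrap the cached value, no extra Task allocation
    Console.WriteLine(first + second);
}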

NOTE: As with all performance recommendations, you should benchmark both versions before making large-scale changes to your code. Feel free to use the BenchmarkDotNet library for this purpose; it can be found in its GitHub repository here.

 

 

Expression-bodied Members 

With C# 6 we got expression-bodied members, which allowed us to express simple methods using lambda-like syntax. However, that syntax was limited to methods and read-only properties. With the first-ever community contribution to C#, C# 7 expands this syntax to cover constructors, finalizers, and property accessors.

If we take a typical property with a throw expression as part of its set accessor, we can now write it as:

public object SomeProperty
{
    get => _someProperty;
    set => _someProperty = value ?? throw new ArgumentNullException();
}

I won't bother with examples for constructors or finalizers; the main documentation is pretty clear on those and I am not convinced the syntax will be used very often in those cases. Constructors are rarely so simple that the expression-bodied syntax makes sense, and finalizers are so rarely needed that most of us will not get an opportunity to write one at all, expression-bodied or otherwise.

 

 

Throw expressions

It has always been easy to throw an exception in the middle of an expression: just call a method that does it for you! But C# 7.0 directly allows throw as an expression in certain places:

class Person
{
    public string Name { get; }
    public Person(string name) => Name = name ?? throw new ArgumentNullException(nameof(name));
    public string GetFirstName()
    {
        var parts = Name.Split(' ');
        return (parts.Length > 0) ? parts[0] : throw new InvalidOperationException("No name!");
    }
    public string GetLastName() => throw new NotImplementedException();
}

 

The future directions

New features are constantly being added to the language. Microsoft and the community are working not only on 7.2 but also on version 8 of C#. The best resource for tracking all the future features and suggestions is the GitHub page here. My next topic will be dedicated to the new features in C# 7.1, which are exciting in themselves. Stay tuned!

 

 

Part 2. The theory

 

.NET Framework is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes an extensive class library named the Framework Class Library (FCL) and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for .NET Framework execute in a software environment (in contrast to a hardware environment) named the Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. (As such, computer code written using .NET Framework is called "managed code.") The FCL and the CLR together constitute .NET Framework.

According to Figure 1, the first version didn't offer much, but I know of enterprise-level systems written in .NET 1.1. Microsoft started .NET Framework development in the late 90s, and they did a great job; it was revolutionary (to some extent). I remember reading Fritz Onion's book "Essential ASP.NET"[1] and the way he admired the new approach to web development using this platform. Fortunately, Microsoft didn't stop there, and in 2006 they released 2.0. It had lots of exciting changes: generics, static classes, attributes, partial types, etc., features without which I can't imagine modern development.

I think that was a bit much for the intro, and the aim here is not to describe all the changes in every .NET Framework edition. I think version 6 was a real breakthrough, i.e. exactly the release where Microsoft shifted towards open source and introduced the compiler-as-a-service, aka "Roslyn" (please see Figure 1 for reference).

Traditionally, compilers are black boxes: source code goes in one end, magic happens in the middle, and object files or assemblies come out the other end. As compilers perform their magic, they build up a deep understanding of the code they are processing, but that knowledge is unavailable to anyone but the compiler implementation wizards, and the information is promptly forgotten after the translated output is produced. For decades, this worldview has served us well, but it is no longer sufficient. Increasingly we rely on integrated development environment (IDE) features such as IntelliSense, refactoring, intelligent rename, "Find all references," and "Go to definition" to increase our productivity. We rely on code analysis tools to improve our code quality and code generators to aid in application construction. As these tools get smarter, they need access to more and more of the deep code knowledge that only compilers possess. This is the core mission of Roslyn: opening up the black boxes and allowing tools and end users to share in the wealth of information compilers have about our code.

The .NET Compiler Platform ("Roslyn") provides open-source C# and Visual Basic compilers with rich code analysis APIs. You can build code analysis tools with the same APIs that Microsoft uses to implement Visual Studio! It is hard to overestimate the value of the open-source approach, but if you were to ask "what is it for?", I'd refer you to an excellent blog post by Alex Turner, "Use Roslyn to Write a Live Code Analyzer for Your API."

 

The next stop is .NET Core. I believe it was the most long-awaited feature, and platform, overall. The journey started with the "Mono" project (the first attempt to make .NET platform-agnostic) and found its logical resolution in .NET Core, where the platform was completely rewritten from the ground up (as Microsoft and its partners state) rather than being a .NET Framework alternative (as the Wikipedia definition says), even though it shares some of the .NET Framework APIs.

ASP.NET Core, which runs on top of .NET Core, is the next generation of ASP.NET and provides a familiar and modern framework for web and cloud scenarios. It includes the next version of ASP.NET MVC, Web API, Web Pages and SignalR. It has a high-performance, modular design and supports full side-by-side installation, making it seamless to migrate from on-premises to the cloud. These products are actively developed by the ASP.NET team in collaboration with a community of open-source developers.

The following characteristics best define .NET Core:

  • Flexible deployment: Can be included in your app or installed side-by-side user- or machine-wide
  • Cross-platform: Runs on Windows, macOS, and Linux; can be ported to other operating systems. The supported Operating Systems (OS), CPUs and application scenarios will grow over time, provided by Microsoft, other companies, and individuals
  • Command-line tools: All product scenarios can be exercised at the command-line
  • Compatible: .NET Core is compatible with .NET Framework, Xamarin, and Mono, via the .NET Standard
  • Open source: The .NET Core platform is open source, using MIT and Apache 2 licenses. Documentation is licensed under CC-BY. .NET Core is a .NET Foundation project
  • Supported by Microsoft: .NET Core is supported by Microsoft, per .NET Core Support

Attending different conferences (as a speaker and a guest), I hear the same questions quite often: "What does it mean for developers? Will Microsoft stop supporting .NET Framework?", etc. In short: no. These are two different paradigms, development models, and frameworks, which can nevertheless be used together. You may ask, how is that possible? Well, there is a lever you may never have heard about, which is ".NET Standard".

 

.NET Standard solves the code sharing problem for .NET developers across all platforms by bringing all the APIs that you expect and love across the environments that you need: desktop applications, mobile apps & games, and cloud services:

  • .NET Standard is a set of APIs that all .NET platforms have to implement. This unifies the .NET platforms and prevents future fragmentation
  • .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the existing APIs that have been requested
  • .NET Standard 2.0 includes a compatibility shim for .NET Framework binaries, significantly increasing the set of libraries that you can reference from your .NET Standard libraries
  • .NET Standard will replace Portable Class Libraries (PCLs) as the tooling story for building multi-platform .NET libraries

 

Sounds good, but what does it mean for me?

The .NET platform has been forked quite a bit over the years. On the one hand, this is actually a really good thing. It allowed tailoring .NET to fit needs that a single platform wouldn't have been able to meet. For example, the .NET Compact Framework was created to fit into the (fairly) restrictive footprint of phones in the 2000s. The same is true today: Unity (a fork of Mono) runs on more than 20 platforms. Being able to fork and customize is an important capability for any technology that requires reach. On the other hand, this forking poses a massive problem for developers writing code for multiple .NET platforms, because there isn't a unified class library to target.

There are currently three major flavors of .NET, which means you have to master three different base class libraries in order to write code that works across all of them. Since the industry is much more diverse now than when .NET was originally created, it's safe to assume that we're not done with creating new .NET platforms. Either Microsoft or someone else will build new flavors of .NET in order to support new operating systems or to tailor it for specific device capabilities. This is where .NET Standard comes in (see Figure 2):

Figure 2. .NET Standard library

With .NET Standard 2.0, Microsoft focuses on compatibility. In order to support .NET Standard 2.0 in .NET Core and UWP, they’ll be extending these platforms to include many more of the existing APIs. This also includes a compatibility shim that allows referencing binaries that were compiled against the .NET Framework.

NOTE: From what I heard at the latest VSLive! conference this year, Microsoft will be focusing equally on .NET Framework, .NET Core and Xamarin, without any preference among them. This means that new APIs will be delivered equally quickly and more or less simultaneously for all of them.

In .NET Core apps (either ASP.NET Core apps or console apps, as of today) there are new possibilities, like being able to run your app (an ASP.NET Core app, for example) on top of the .NET Core platform or on top of the traditional .NET Framework 4.5.x+, which is critical for many enterprise apps that might not yet have all the libraries/components they depend on (custom or third party) compiled for .NET Core. To do so, you can use so-called Target Framework Monikers (TFMs). Target Framework Monikers are IDs of the form framework + version that you can target from your apps in .NET Core and ASP.NET Core. As examples (there are more), you can use:

  • “netcoreapp2.0” For .NET Core 2.0
  • “net461”, “net47” for .NET Framework versions 
  • "portable-net462+win8" for PCL profiles
  • "dotnet5.6", "dnxcore50" and others, for older .NET Core preview versions (Before .NET Core 1.0 RTM and .NET Core 2.0 were released)
  • “netstandard1.5”, “netstandard2.0”, etc. for .NET Standard Platform monikers.

You can also use preprocessor directives to target the traditional framework:

#if NET462
  // access something that requires traditional .NET Framework (like Windows-related stuff)
#endif

 

Closing

It is recommended to use .NET Core whenever possible for your requirements (as it is cross-platform) and the full .NET Framework (which might only work on Windows, depending on the APIs being used, and is also heavier) only when you have no other choice because the API you want to use doesn't exist in .NET Core.
Take into account that the purer the .NET Core you use, the easier modern scenarios will be: not just cross-platform across Linux and Windows, but also Docker container environments, which are a lot better/lighter when running on pure .NET Core images than on full Windows Containers.


Sure, if your .NET Framework libraries don't use any particular Windows API, they might also run okay on Linux, but the more libraries you have as .NET Standard and .NET Core projects, the better for the future. 🙂

 

 


[1] Don’t know who is Fritz Onion? Fritz is a co-founder of Pluralsight where he serves as the Content Advisor. Fritz is the author of the book 'Essential ASP.NET' published by Addison Wesley.

 


SQL Server Database Export/Import operation in MS Azure

 

10 August 2017

The beauty of Azure is that it has multiple ways to do the same thing. Imagine the scenario;  you need to copy the database between subscriptions. Considering SQL Server Import/Export procedure, you can do that either from the UI, or use command line tools (like SQLPackage).

The way it works in the Azure UI: you go to the SQL Server database you want to export, and you have an Export option available (circled in red below). It will export the database to the selected Blob storage (you need to have a storage account created in the same subscription prior to this operation).

In the case of the same subscription but different SQL Server instances (or if your Azure subscriptions belong to the same tenant), you can import this database (the actual *.bacpac file) using the same storage account, as it will be visible to all of them. If these are two completely different subscriptions but you still need to export/import this database, you may need to create a storage account in the second subscription and import from there. To do that, go to the SQL Server instance and you will see the "Import database" option, as highlighted below:

To import the database, you need to select the file (*.bacpac) from the storage account and the pricing tier (along with collation, admin name, and password).

TIP: When you download the database backup file (*.bacpac) from the storage account, it will appear as a zipped archive. No worries about that, as the bacpac file is a zip archive in reality.

There is nothing wrong with this approach, but in this case the import procedure takes quite a long time (in my experience) even for small databases, and much longer for larger ones. The import process is hidden from the user, and the only message you will receive is "Request is submitted to import database":

You can monitor it to some extent by clicking the "Operations" tile lower on the same page (called "Import/Export history"), but it doesn't update in real time. The error messages (if you have any) are not that informative either:

If you need to perform the same task more quickly, automate it in a script (using PowerShell or the Azure CLI), or add more visibility into the process, the right way to go is the SQLPackage command-line utility.

As stated in the documentation, to export/import using the SQLPackage utility, you can use the relevant parameters and properties. The utility should already be installed on your machine if you have installed SQL Server Management Studio (the latest one) or Visual Studio tools for SQL Server. You can also install it directly from the MS website here.

The command I used to export a database is quite simple:

Code:

SqlPackage.exe /a:Export /tf:<filename>.bacpac /scs:"Data Source=<server name>.database.windows.net;Initial Catalog=<database name to export>; User id=<user id>; Password=<password>"

TIP: You can find the SQLPackage utility at the following path: "C:\Program Files (x86)\Microsoft SQL Server\<version>\DAC\bin". If you use the utility from there, the exported file will be dropped into the same directory. A better way is to add SQLPackage.exe to Path in the environment variables, so you can call it from anywhere on the command line.

The import operation is as simple as the export. Please find the code below:

Code:

SqlPackage.exe /a:import /tcs:"Data Source=<destination server name>.database.windows.net;Initial Catalog=<db name>;User Id=<server admin>;Password=<admin password>" /sf:<file name>.bacpac /p:DatabaseEdition=Standard /p:DatabaseServiceObjective=S1

It is worth mentioning that the *.bacpac file will be picked up from the same directory where SQLPackage.exe lies (unless you used the tip above). "User Id" and "Password" should be the SQL Server admin's credentials in Azure. DatabaseEdition corresponds to the SQL Server editions (the full list of possible values: Basic|Standard|Premium|Default); the same parameters can be found in the documentation mentioned above. DatabaseServiceObjective corresponds to the SQL Server pricing tiers. Possible values are: Basic, S0-S3 (Standard tier), P1/P2/P4/P6/P11/P15 (which belong to the Premium tier), and PRS1-2/4/6 (IO- and compute-intensive instances, still in preview).

As mentioned before, this way is quicker and gives you more control over and visibility into the process. The error messages are more informative in this case as well:

So, using these two simple commands, you can export/import a database within the same or to a different Azure subscription without the overhead, automate the process if necessary, or use SQLPackage as part of a script for other tasks.

 

Creating a Service Principal for VSTS endpoint

 

29 June 2017

Imagine a situation where you have to manage multiple Azure subscriptions (belonging to different tenants) not only for your organization but also for your company's clients. In the software development process, you can host VSTS (Visual Studio Team Services) on your organization's Azure account and deploy to customers' subscriptions (which is quite a widespread use case). Subscription management in Azure is designed in such a way that you cannot have two similar products assigned to you. For instance, you cannot have two Pay-as-you-go subscriptions assigned to your account (or two BizSpark or Enterprise subscriptions, etc.). Resources from different subscriptions can be assigned to you, but you can't manage them directly from your account, as they belong to different tenants. What if you need to build a delivery pipeline in VSTS and don't see a target subscription (to deploy your application to) on the list? Well, that could be a problem, but before calling Microsoft support, let's see what we can do about it.

Fortunately, you can create a custom service connection by clicking 'Manage' next to the Azure subscriptions drop-down (see the screenshot above). When you go to 'New Service Endpoint' -> 'Azure Resource Manager', you will see a familiar list of subscriptions assigned to your account:

The dialog suggests: ‘If your subscription is not listed above, or your account is not backed by Azure Active Directory or to specify an existing Service Principal, use the full version of the endpoint dialog.’ See the screenshot below:

The dialog seems complicated (at a glance), with many fields to fill out. Let's see what we can do.

Let's start filling this form out :)

  • Connection name is the simplest – put any suitable name in here
  • Environment – leave a default value (unless you’re in China, Germany or US Gov)
  • Subscription ID – the ID of the target subscription (the one you deploy to). Go to Azure -> Subscriptions, select it, and copy the subscription ID from there. Or you can use the Azure CLI and enter:
azure login
azure account list

Copy the ID:

  • Subscription Name – the name, as opposed to the ID (see the previous step)
  • Tenant ID – Go to Azure -> Active Directory -> Properties blade -> copy 'Directory ID' from there. Alternatively, as long as you're already logged in to the Azure CLI, you can use the following command:
azure account show
  • Service Principal Client ID – this one is a bit more complicated. You won't find this value anywhere; the only way is to create a Service Principal that will be assigned to this endpoint. To do so, we need to use the Azure CLI (or PowerShell):
azure ad sp create -n <app> -p <password>

Where <app> is a random app name (it doesn't really matter) and <password> is a password for the Service Principal (and the Service Principal Key at the same time). This command will create an SP:

The value you need is 'Service Principal Names'. The last step is to assign the Service Principal to a role. Using the Service Principal Name, execute the following CLI command to assign the SP to the Contributor role:

azure role assignment create --spn 990ffeff-0016-4535-809c-79db18336db4 -o Contributor

Once you are done with this, click the 'Verify Connection' link on the form and, once it is verified, you will be able to use this endpoint to connect to the client's subscription.

 

What I learned at Microsoft Build 2017 - Part 2

 

18 May 2017

Containers galore

As you may see from the first part of this topic, a lot of stuff is going on around containers, containerization, and everything related to them. Fortunately, that wasn't the only topic of the conference, so, continuing the story, I'm going to talk about the other areas worth attention.

Microservices!

Yes, I know what you think: heard about it/use it already, so what's new? Fortunately, the key to Cesar's presentation was not reinventing the wheel 😊 the key is containers (what! … again?). As I have already mentioned, Microsoft focuses a lot of effort on the cloud, and on Service Fabric along with Azure Functions in particular. "It works on my local machine! ...Why not in production?" If this expression sounds familiar to you, I highly recommend watching this presentation. Docker helps automate the deployment of applications as portable, self-sufficient containers that can run on any cloud (or on-premises). From a practical standpoint, the most attractive thing for developers was the demonstration of a sample .NET Core reference application, powered by Microsoft and based on a simplified microservices architecture and Docker containers. This reference application, demonstrated by Cesar, proposes a simplified microservice-oriented architecture implementation to introduce technologies like .NET Core with Docker containers through a comprehensive application (which is eShop, which should sound familiar to most developers).

 

What's interesting about this demo app is that it offers different microservice types, meaning different architecture pattern approaches depending on the purpose, as shown in the schema above. According to this architecture, the EventBus works on top of RabbitMQ and can potentially be replaced with another bus technology that comes to mind, like Service Fabric, NServiceBus, MassTransit, etc. I would suggest keeping an eye on this project, as the guys might fork it in the future to target specific microservice clusters/orchestrators using additional cloud infrastructure (Azure Container Service and the already mentioned DC/OS, Kubernetes, Docker Swarm), which is very interesting from a practical standpoint.

The project itself can be found on GitHub. That's not all: Cesar also shared an eBook (from the Architecting & Developing series) that uncovers the details of microservices architecture for containerized .NET applications. It can be downloaded from here for free (no registration required).

You can find the link to this presentation below (“Microservices Architecture with ASP.NET Core”).

Developing on Windows Server

This is the last word about containers, I promise 😊

It was a good presentation by Taylor Brown (PM, Windows Server) and Steve Lasker (PM, Visual Studio/Containers). The goal for the guys obviously was to make developers love using Windows Server. Long story short: every (true) developer likes the creation process. Coping with outdated stuff does not deliver that much pleasure, especially adding new features to it or maintaining it. But the reality is simple: many applications can't simply be thrown away (for many reasons). Fortunately, there is a way out, or at least the situation can be made easier by adding new tools and code to existing apps. How so? Containerize the existing app!

  • Containerize it for portability/efficiency and reliability
  • Transform monoliths to microservices (adding new code and transforming existing one)
  • Accelerate the process (by using agile cloud native app development)

The use case is simple – with Azure Service Fabric and Windows Containers you can push new features immediately, roll them back if necessary and implement the new ones with greater confidence.

What is good is that Visual Studio 2017, once released, supported Docker (debugging and testing of the app in Docker containers, breakpoint debugging, Docker asset scaffolding, etc.).

I encourage you to use the mentioned patterns to cope with existing applications or greenfield developments, and to watch the recorded presentation where Steve demonstrates how all that stuff works (the link, as usual, can be found at the end of this topic).

Another interesting feature worth mentioning (especially regarding managing existing applications) is that we can now create Docker images for existing artifacts. So, roughly speaking, I can say in PowerShell: hey, go to that machine, scavenge the IIS app for me (and create an image from it), and it will be done with one line in the PS console:

 ConvertTo-Dockerfile -RemotePath \\192.168.0.1\c$ -OutputPath c:\myDockerFile -Artifact IIS

How cool is that! 😊 This PowerShell module can be found in the PS Gallery.

Regarding container support, Azure Service Fabric now supports Windows Server Containers and Hyper-V isolation, image deployment and activation, volume drivers, networking and DNS discovery, and resource governance. Kubernetes now has alpha support for Windows containers (there is a good demonstration from Red Hat where OpenShift runs Windows containers). It supports one container per Pod (a collection of linked containers that shares an IP address). The easiest way to start with Kubernetes is to use ACS. Please follow the link to the documentation.

Nano Server image. For those who don't know what this is: it's a container-optimized Windows Server 2016 image with an uncompressed size of around 1 GB. What's interesting about it is that it lacks the Windows components that are irrelevant in containers or for modern development. Optional components are layers now, so they can be installed on top of the image.

Comprehensive documentation on Windows Containers.

And lastly, don’t hesitate to use MS Feedback Hub and user voice portals to get back to Microsoft teams with comments or feature requests.

 

Microsoft Graph

I cannot avoid mentioning Microsoft Graph (formerly known as the Graph API), as right now it penetrates every MS product and provides native ways of managing your application through REST APIs. You can use the Microsoft Graph API to interact with the data of millions of users in the Microsoft cloud. Use Microsoft Graph to build apps for organizations and consumers that connect to a wealth of resources, relationships, and intelligence, all through a single endpoint: https://graph.microsoft.com
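
As a minimal sketch (the accessToken variable is assumed to have been acquired from Azure AD beforehand, and the call needs to live inside an async method), requesting the signed-in user's profile from the /v1.0/me endpoint looks roughly like this:

using (var client = new HttpClient())
{
    // Attach the Azure AD bearer token (accessToken is assumed to exist)
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);

    // Request the signed-in user's profile as JSON
    var json = await client.GetStringAsync("https://graph.microsoft.com/v1.0/me");
    Console.WriteLine(json);
}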

What can you do with it? Potentially many things, starting from MS Office management (calendars, alerting, meeting requests) and OneDrive, and ending with Skype and Azure AD management (users/groups/passwords, subscriptions), which are the most precious features in my opinion.

One of my questions to the Microsoft engineers was about the extremely fast changes to some APIs and how I could keep track of them. Now you can use the changelog on the MS website to be sure you won't miss anything important. Keep an eye on it to always stay up to date, and try a sample request in the Graph Explorer.

 

Bots

Microsoft Bot Framework is another technology that has a lot of buzz around it. Bots, AI and machine learning are the new tendency in software development and a step towards self-contained AI, so everybody is interested in them (the rooms at the conference were overcrowded as well). You can build and connect intelligent bots that interact with your users naturally wherever they are: from your website or app to Cortana, Skype, Teams, Office 365 mail, Slack, Facebook Messenger, Skype for Business and more!

The most interesting features from my perspective are Adaptive Cards and Payment Request APIs along with voice/language recognition capabilities.

I attended a very interesting presentation by Mat Velloso and Ryan Volum (both from the DX group, Developer Experience) that revealed development patterns and best practices in this regard. I highly recommend watching it yourself, as there is lots of cool stuff going on 😊 Obviously, MS is adopting this technology itself, growing its expertise in this field and collecting best practices, and the guys shared some really cool hands-on experience of implementing bots for several customers, as well as the Bot Framework architecture. Please find the link at the end of this topic.

Bots framework documentation.

 

Azure Cosmos DB

Talking about this conference, I couldn't avoid mentioning Cosmos DB, Microsoft's new globally distributed database service designed to enable you to build planet-scale applications. In this session, Rimma Nehme (Cosmos DB Team Architect) explained how to start leveraging Cosmos DB for applications and described some of its differentiating features (multi-model (key-value, document & graph database), APIs (DocumentDB, MongoDB, Tables, Gremlin graph)). It was also demonstrated how easy it is to port over existing code and data from popular open-source NoSQL databases. Azure Cosmos DB was built from the ground up with global distribution and horizontal scale at its core. It offers turnkey global distribution across any number of Azure regions by transparently scaling and replicating your data wherever your users are.

Important highlights about this DB:

  • Cosmos DB natively partitions your data for high availability and scalability. Cosmos DB offers 99.99% guarantees for availability, throughput, low latency, and consistency
  • Cosmos DB has SSD backed storage with low-latency order-of-millisecond response times
  • Cosmos DB's support for consistency levels like eventual, consistent prefix, session, and bounded staleness allows for full flexibility and a low cost-to-performance ratio. No database service offers as much flexibility in consistency levels as Cosmos DB
  • Cosmos DB has a flexible data-friendly pricing model that meters storage and throughput independently
  • Cosmos DB's reserved throughput model allows you to think in terms of the number of reads/writes instead of the CPU/memory/IOPS of the underlying hardware
  • Cosmos DB's design lets you scale to massive request volumes in the order of trillions of requests per day
  • Throughput: 100s to 100s of million requests/sec
  • Multi-homing APIs (Apps don’t need to be redeployed during regional failover)
  • Automatic multi-region replication (dynamically adjusted)
  • Storage: Gigabytes to Petabytes
  • Guarantee millisecond latency worldwide

See the most popular high-level architectures and use cases on the official website, along with the recorded video on Channel 9 (the link is below).

There is more!

There were even more sessions I was unable to attend myself, as the rooms were overcrowded or they were running in parallel and I was able to see only part of them, but the topics deserve a mention:

  • The future of C# – all about the recently shipped C# 7.0 and Visual Studio 2017
  • SignalR in .NET Core – in my opinion, the guys produced one of the most interesting and entertaining sessions of this Build. It was all about the new SignalR capabilities on the .NET Core platform
  • Machine Learning for Developers – how to build even more intelligent apps and services (all of Microsoft’s offerings such as Azure Machine Learning, SQL Server R Services, Data Science Virtual Machine, Cognitive Services and Cognitive Toolkit, and Azure Data Lake Analytics)

 

Microsoft Partner Solutions (or, “Know your toolset”)

Along with Microsoft, the lounge zone at the conference centre was occupied by partners. I had an opportunity to talk to all of them and get the latest news about their products. The most interesting ones are the following:

  • RedHat OPENSHIFT

OpenShift is a container platform (or, more precisely, container orchestration based on Kubernetes). It supports multiple languages, frameworks, and databases and allows hybrid and multi-cloud deployment

  • LaunchDarkly

A scalable feature management platform that wraps new functionality in feature flags, separating deployment from feature release. It helps companies perform canary launches while incorporating kill switches to turn off poorly performing features

  • Octopus Deploy - Octopus works with your build server to enable reliable, secure, automated releases of ASP.NET applications and Windows Services into test, staging and production environments, whether they are in the cloud or on-premises. In my opinion, the most interesting capability of this platform is simplified deployment process to VMs (that is missing and so desirable in VSTS)
  • JetBrains – lots of different offerings. I like ReSharper and always use it in development, as it simplifies routine work a lot. Apart from that, the guys were presenting the Rider IDE (a Visual Studio for Mac competitor), which is in early preview now. In my opinion, right now it has a lot more capabilities than VS for Mac (I have both on my laptop)
  • Redgate – SQL Server tools for developers and DB DevOps. Surprisingly, some of their tools (like SQL Prompt) are now part of VS 2017, which is definitely good news.

As you may see, a lot of this is dedicated to containers, and this wide choice of options gives extraordinary flexibility. As Corey correctly pointed out, it is up to you which approaches to use. If you have a preconceived notion or a favorite platform, you can deploy it and start playing around! You have all the necessary instruments in your hands!

 

 Mentioned Videos on Channel 9 

 

Azure Compute – new features and roadmap (Corey Sanders)

Microservices Architecture with ASP.NET Core (Cesar De la Torre)

How to build serverless business applications with Azure Functions and Logic Apps for PowerApps (Jeff Hollan, Eduardo Laureano)

Developing on Windows Server: Innovation for today and tomorrow - containers, Docker, .NET Core, Service Fabric

How to build global-scale applications with Microsoft Azure SQL Database

Bot capabilities, patterns and principles (Mat Velloso and Ryan Volum)

Azure Cosmos DB – Microsoft’s globally-distributed, multi-model database service

 

I hope describing all this stuff was worth the effort and that you can learn something new from these topics. Happy reading!

What I learned at Microsoft Build 2017 - Part 1

 

17 May 2017


In this topic, I want to share what I learned at MS Build and explain the key takeaways from this event. I won't be focusing on the keynotes much, as you can watch the relevant videos on Channel 9, but the key message, as Microsoft's VP Scott Guthrie said, was: "The success of your solution on the Azure platform is our primary goal!" That means that Microsoft keeps concentrating its efforts around Azure services… but hold on, there is much more than that!

It is rare that I get so excited about a conference. I was surprised and even baffled by some of the new releases and news Microsoft had thoroughly prepared to present at this conference. Let's take everything one by one, so I can make sure I didn't forget anything important. First and foremost, MS finally released the long-awaited Visual Studio for Mac (the release date is the 8th of May 2017). Finally, developers have received a familiar instrument for solution development on the Mac, using .NET Core and Xamarin (or Unity), but the tooling, compared to the same IDE for Windows, still requires some work. Especially disappointing is the absence of Docker support.

Continuing the topic of development instruments, Microsoft announced a new Cloud Shell in Azure (a command line) that works with your Azure account right from the browser. Every session is in sync with a $Home directory stored in Azure, which, in turn, makes it possible to access files, VMs and other artifacts you deal with via the UI. PowerShell is not yet supported and is coming soon, although you can sign up for a private preview here. The app for iOS is also available in the App Store. I gave it five stars, as it is pretty neat and quick (the command line is not yet available there though); it was demonstrated at the conference by Corey Sanders (see "Azure Compute – new features and roadmap" on Channel 9).

Infrastructure

According to Scott Hanselman, MS has announced 38 regions/data centers around the globe (and nine countries where disaster recovery is possible because of multiple regions: USA, Canada, UK, France, Germany, India, China, Japan, Australia). With Cross-Region Disaster Recovery (automatic DR to another area, i.e. Azure to Azure), we can finally take advantage of multiple regions within a country. This feature is coming soon.

So, to summarize, we are going to have in-country DR (9 countries, as mentioned above), a multi-instance SLA (99.95%), and a single-instance SLA (99.99%). In comparison to other clouds, Microsoft is far ahead.

The Instance Metadata Service is another new feature on the Azure platform; it exposes an endpoint so you can manage upcoming maintenance events, bootstrap a VM with an identity, and obtain VM context in a fully programmatic way. It also provides the status of all the running instances.

Containers

There were lots of topics at the conference pertaining to containers overall, and their interaction with OS components and Azure services in particular. Microsoft Azure Container Service provides Docker tooling and API support along with provisioning of Kubernetes, DC/OS, and Docker Swarm. I have inserted the links for those who don't know what this is all about. In short, these are all container orchestration, deployment automation, and scaling tools, based on Mesos DC/OS, Kubernetes or Docker Swarm, that provide the best Docker experience. Apart from this, Microsoft has on-boarded the Deis team for assistance on Kubernetes (and the Helm tool in particular, which simplifies the management of pre-configured Kubernetes resources). See the demo by Corey Sanders (I have provided the most interesting links below).

Service Fabric

Another interesting topic is Azure Service Fabric. There is nothing new in Azure-based microservices, except the fact that you can now deploy them in containers (surprise, surprise). Azure Service Fabric now provides stateful and stateless services (.NET and Java APIs on Windows Server and Linux) that can be deployed in Azure, Azure Stack, on-premises, in OpenStack and even AWS. The application (Music Store) shown by Corey in this presentation can be found on GitHub, along with explanations of how to run it in Docker for Windows.

Azure Batch

This service has been around for a while. It offers job scheduling as a service in the first place, and other platform offerings have become very easy to build on top of the Azure Batch service. It allows you to focus on the major questions, i.e. what infrastructure you want, when/where, and pricing. Microsoft has announced low-priority Batch VMs where you pay 80% less and which, sure enough, are deeply integrated with the job scheduling (right now this is available in preview mode). Another aspect of Batch is being able to take advantage of rendering capabilities (using compute instances to render images/videos/3D models). A Senior Product Manager from Autodesk shared his experience in this regard.

Serverless

The Functions side of the house 🙂 For those who don't know what this is about, it is an event-based way of programming that doesn't require infrastructure (and has a billing model based on usage/demand/call/execution, etc.). Microsoft's offering here is Azure Functions (I highly recommend watching the recorded presentation by Eduardo Laureano, PM on Azure Functions; I will talk about all this stuff a bit later). Microsoft announced the integration with Visual Studio that day (a lot of stuff was announced on the 8th of May), which simplifies editing and debugging of functions. A new feature that pertains to this set of technologies is the Azure Functions Runtime. What's cool about it is that it allows you to install the runtime and use Functions in a container outside the Azure environment. You can download a preview version of it from here. It can run anywhere containers work, or on top of Azure Stack (which is, in turn, Azure functionality outside the Azure environment).

Azure Managed Applications and Service Catalog

Package and seal Azure functionality for other folks to use. It means that the application being developed can be packaged and deployed by someone else using the ARM experience, or sold/exposed to a third party, etc. What is interesting about it is that the catalog uses an enhanced security model, so an app in the catalog can be exposed or pre-deployed to a certain identity/account (with or without approval) and has a configurable set of parameters required for successful deployment. Right now it's in preview mode (just like many other things in this topic), and the link to the documentation leads to a general Azure website (I was unable to try it myself due to the missing documentation).

Compute

Microsoft has expanded the set of compute instance sizes (by adding six new ones):

  • F – Compute intensive
  • NC – NVIDIA GPUs K80 compute
  • NV – NVIDIA GPUs M60 Visualization
  • H – Fastest CPU IB Connectivity (very high throughput between instances)
  • L – Large SSDs
  • SAP – SAP Large instances

Four more were announced during the conference:

  • ND (P40) – NVIDIA Gear towards deep learning, scale-out computation
  • NCv2 (P100s) – computational SKU (scale-out compute with InfiniBand connectivity)
  • Dv3 (SSD storage, fast CPU) and Ev3 (high memory) – next generation of existing SKUs with nested virtualization (meaning that they’ll be based on Windows 2016 and support VMs inside VMs deployed into the cloud)

Azure Logic Apps

Azure Logic Apps is not a new service, but it is obviously not well known enough. It takes advantage of existing BizTalk servers (which connect to logic apps through an adapter) to connect to SaaS and invoke logic apps. It makes it easier to connect to trading partners using EDI standards and B2B capabilities.

Microsoft PowerApps

This service has been around for a while, and I know people who use it heavily in development. There is nothing new in the way it works (aside from the growing number of different adapters, which is 140 now), but it allows you to build an app very quickly and doesn't require much mobile development experience, which was perfectly demonstrated by the guys (Jeff Hollan, Eduardo Laureano) on the very last day of the conference. I think you will enjoy the presentation as much as I did. Please watch "How to build serverless business applications with Azure Functions and Logic Apps for PowerApps" for more info on the topic (the link to the Channel 9 video is below).

There is still a lot to tell, so I had to split my topic into two parts. I will publish the second one shortly. Stay tuned :)

 

 Mentioned Videos on Channel 9 

Azure Compute – new features and roadmap (Corey Sanders)

How to build serverless business applications with Azure Functions and Logic Apps for PowerApps (Jeff Hollan, Eduardo Laureano)