AI-Driven Stack Overflow Bot from Microsoft: First look

bot7

Very few developers get through their day-to-day work without using Stack Overflow; it is simply part of a developer's life.

Whenever we have a question or an issue, we go to the browser, search for the question, open a Stack Overflow link, and clear up our doubt there.

Now imagine you have an active bot in Visual Studio Code: as soon as you hit an issue, you just ask the bot, without ever leaving Visual Studio Code.

Sounds interesting, right?

It is possible with the AI-driven Stack Overflow bot, as Microsoft has teamed up with Stack Overflow to create a bot that answers your programming questions from inside the Visual Studio Code editor.

Let us see what is required to run the bot.

Requirements:

  • Node 8.1.4 or higher
  • The StackBot directory

Steps to run:

  • Run npm install in StackBot directory
  • Run npm start in StackBot directory
  • Navigate to http://localhost:portNumber/ to interact with the bot

Please note that as this bot uses a number of services (including Bing Custom Search, LUIS, QnA Maker, and Text Analytics), you will need to create an application and generate a key for each one. Microsoft has created a GitHub page with the necessary details to guide you through this.

For this article, we will concentrate on Visual Studio Code's ability to run the StackOverflow bot.

Configuration of the bot in Visual Studio Code

As I explained earlier, Visual Studio Code allows developers to quickly call the bot using some simple commands.

Steps:

bot8

  • From the Bot dashboard, add the ‘Direct Line’ channel, which will communicate with your bot’s backend service

bot9

  • Add a site with an appropriate name; you will then be redirected to a page where you can generate the tokens

bot10

Once you add a site:

bot11

  • Click on Show to view the keys:

bot12

  • Copy the token and go back to Visual Studio Code
  • Open the user settings and add a new field named StackCode.directLineToken, assigning the token you copied earlier to this field
  • If everything is done correctly, a pane will open with an interactive window, which is nothing but the bot
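For reference, the user settings entry would look something like this (the StackCode.directLineToken key name comes from the steps above; the value shown is just a placeholder for your own token):

```json
{
    "StackCode.directLineToken": "<your-direct-line-token>"
}
```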

Microsoft demoed the bot at the Ignite conference last month, showing how powerful it is.

Let us see some examples of the bot:

Whenever you want to get help from StackOverflow, just start the StackOverflow bot:

bot1

This will open the StackOverflow bot:

bot2

Now you can just type your question in the text box and the bot will give the answers:

bot3

It can even help you when you need code.

For example, suppose you need to convert “Name Surname” into “Surname, First Initial”.

Just ask this to the bot:

bot4

And the bot will reply with the code:

bot5

It can even read an image you upload to the bot.

For example, if you hit an exception, you can take a screenshot of it and simply upload that image to the bot:

bot6

It is really mind-blowing.

 

 


Microsoft Cognitive Services for AI : Vision API

microsoft-cognitive

Recently I took part in a hackathon in which we were required to submit innovative ideas for a well-known bank.

I registered, and after a few days I got an email from the hackathon event team saying they had arranged some webinars to help people come up with innovative ideas.

I was impressed with the agenda of the webinar, which included the following points:

  • Microsoft Vision API
  • Microsoft Speech API
  • Microsoft Language API
  • Microsoft Knowledge API
  • Microsoft Search API

This was the first time I had heard of Microsoft Cognitive Services, and as I learned more about them, I realized how powerful they are.

Let us first see what Microsoft Cognitive Services is:

Microsoft Cognitive Services (formerly Project Oxford) are a set of APIs, SDKs and services available to developers to make their applications more intelligent, engaging and discoverable. Microsoft Cognitive Services expands on Microsoft’s evolving portfolio of machine learning APIs and enables developers to easily add intelligent features – such as emotion and video detection; facial, speech and vision recognition; and speech and language understanding – into their applications. Our vision is for more personal computing experiences and enhanced productivity aided by systems that increasingly can see, hear, speak, understand and even begin to reason.

It has basically 5 main features:

  • Vision
  • Knowledge
  • Language
  • Search
  • Speech

ai1

Let us see how the Vision API works.

We will create a bot application in Visual Studio 2015. If you do not have the Bot Application project template available, then as a workaround just download this project and put the extracted folder into the location below:

C:\Users\YourName\Documents\Visual Studio 2015\Templates\ProjectTemplates\Visual C#

Once this is done, you can see Bot Application template as shown below:

ai2

Click on Bot Application; it will create a sample project with the structure below:

ai3

Here MessagesController is created by default; it is the main entry point of the application.

MessagesController will call the service which handles the interaction with the Microsoft APIs. Replace the code in MessagesController with the code below:

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Description;
using Microsoft.Bot.Connector;
using Newtonsoft.Json;
using NeelTestApplication.Vision;

namespace NeelTestApplication
{
    [BotAuthentication]
    public class MessagesController : ApiController
    {
        public IImageRecognition imageRecognition;

        public MessagesController()  {
            imageRecognition = new ImageRecognition(); // use the concrete implementation; an interface cannot be instantiated
        }

        /// <summary>
        /// POST: api/Messages
        /// Receive a message from a user and reply to it
        /// </summary>
        public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
        {

            ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));

            if (activity.Type == ActivityTypes.Message)
            {

                var analysisResult = await imageRecognition.AnalizeImage(activity);
                Activity reply = activity.CreateReply("Did you upload an image? I'm more of a visual person. " +
                                      "Try sending me an image or an image url"); //default reply

                if (analysisResult != null)
                {
                    string imageCaption = analysisResult.Description.Captions[0].Text;
                    reply = activity.CreateReply("I think it's " + imageCaption);
                }
                await connector.Conversations.ReplyToActivityAsync(reply);
                return new HttpResponseMessage(HttpStatusCode.Accepted);
            }
            else
            {
                HandleSystemMessage(activity);
            }
            var response = Request.CreateResponse(HttpStatusCode.OK);
            return response;
        }

        private Activity HandleSystemMessage(Activity message)
        {

            if (message.Type == ActivityTypes.DeleteUserData)
            {
                // Implement user deletion here
                // If we handle user deletion, return a real message
            }
            else if (message.Type == ActivityTypes.ConversationUpdate)
            {
                // Handle conversation state changes, like members being added and removed
                // Use Activity.MembersAdded and Activity.MembersRemoved and Activity.Action for info
                // Not available in all channels
            }
            else if (message.Type == ActivityTypes.ContactRelationUpdate)
            {
                // Handle add/remove from contact lists
                // Activity.From + Activity.Action represent what happened
            }
            else if (message.Type == ActivityTypes.Typing)
            {
                // Handle knowing that the user is typing
            }
            else if (message.Type == ActivityTypes.Ping)
            {
            }

            return null;
        }
    }
}

In the above code, you can find an interface called IImageRecognition. This interface declares the method which will interact with the Microsoft APIs.

So now we will add the IImageRecognition interface with the code below:

using Microsoft.Bot.Connector;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using System.Threading.Tasks;

namespace NeelTestApplication.Vision
{
    public interface IImageRecognition
    {
        Task<AnalysisResult> AnalizeImage(Activity activity);    
    }
}

Once this is done, let us add the ImageRecognition class, which implements IImageRecognition:

using Microsoft.Bot.Connector;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using System.Web;

namespace NeelTestApplication.Vision
{
    public class ImageRecognition : IImageRecognition
    {
        private   VisualFeature[] visualFeatures = new VisualFeature[] {
                                        VisualFeature.Adult, //recognize adult content
                                        VisualFeature.Categories, //recognize image features
                                        VisualFeature.Description //generate image caption
                                        };

        private VisionServiceClient visionClient = new VisionServiceClient("<your-subscription-key>"); // sign up at https://www.microsoft.com/cognitive-services/en-us/sign-up to get a key

        public async Task<AnalysisResult> AnalizeImage(Activity activity)  {
            //If the user uploaded an image, read it, and send it to the Vision API
            if (activity.Attachments.Any() && activity.Attachments.First().ContentType.Contains("image"))
            {
                //stores image url (parsed from attachment or message)
                string uploadedImageUrl = activity.Attachments.First().ContentUrl;
                uploadedImageUrl = HttpUtility.UrlDecode(uploadedImageUrl.Substring(uploadedImageUrl.IndexOf("file=") + 5));

                using (Stream imageFileStream = File.OpenRead(uploadedImageUrl))
                {
                    try
                    {
                        return  await this.visionClient.AnalyzeImageAsync(imageFileStream, visualFeatures);
                    }
                    catch (Exception)
                    {
                        return null; //on error, reset analysis result to null
                    }
                }
            }
            //Else, if the user did not upload an image, determine if the message contains a url, and send it to the Vision API
            else
            {
                try
                {
                   return await visionClient.AnalyzeImageAsync(activity.Text, visualFeatures);
                }
                catch (Exception)
                {
                   return null; //on error, reset analysis result to null
                }
            }
        }
    }
}

Note that you will be required to add an API key, which you can get from the Cognitive Services page of Azure here.

The ImageRecognition class has an important method named AnalizeImage, which reads the image from its location and turns it into a stream. It then calls the API method below, passing the image stream:

this.visionClient.AnalyzeImageAsync(imageFileStream, visualFeatures);

The above method returns an AnalysisResult, from which the caption can be extracted as below:

var imageCaption = analysisResult.Description.Captions[0].Text;

So the image caption is the text returned after analyzing the image.

Let us try this out.

If we want to test our bot locally, then the Bot Framework Emulator is the best option.

The Bot Framework Emulator is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.

You can download the Bot Framework Emulator from here.

The only important thing it requires is the URL of your API. For example, in our case it would be:

http://localhost:PortNumber/api/messages

Now when we upload the image on Bot emulator, it will give the result as below:

ai4

It is awesome. Hope it helps.

Angular with .Net Core 2.0

ang12

When we think of creating a JavaScript application, for example an Angular project, we generally do not think of Visual Studio, because we are not used to writing Angular code in Visual Studio.

But the Microsoft team has made it possible to write Angular code in Visual Studio, and it works very well with .NET back-end code.

Let us see how to create the Angular application using new templates which have been introduced with .Net Core 2.0. For more information have a look here.

First of all, click on File -> New -> Project. It will open the window below:

ang1

Then click on .NET Core Web Application; it will open the window below:

core3

Click on the Angular template, which will create a brand new Angular project in which:

  • The MVC Views are replaced by Angular
  • We still have Models and Controllers
  • There is no Razor, so if you are a big fan of Razor, this Angular approach is not the one for you
  • There is a new ClientApp folder where the JavaScript framework components live

The structure of the project looks like this:

ang2

As you can see, we have Controllers here, but currently we do not have any Models; they can be added once we plug a database into the application.

Let us look at how the Views look. For that, we will open Index.cshtml, which looks like below:

ang3

This is just the bootstrapper of Angular; we are not going to write any views or Angular code in this folder for the current project.

Now let us look at the actual client side of the application where we can find all the TypeScript.

What is TypeScript?

TypeScript is a free and open-source programming language developed and maintained by Microsoft. It is a strict syntactical superset of JavaScript and adds optional static typing to the language. Anders Hejlsberg, the lead architect of C# and creator of Delphi and Turbo Pascal, has worked on the development of TypeScript.

For our example we have the CounterComponent, which increments a counter once we click a button, and the FetchDataComponent, which fetches data from the API and shows it in the view, as shown below:

ang4

Let us look at the CounterComponent first.

In the counter.component.ts file we have the code below:

ang5

Here you can see Angular code, but written in TypeScript. For example, the currentCount variable in the component above is bound in the corresponding HTML file as below:

ang6

Once the code is written, just click the IIS Express button at the top of the page and it will run the Angular application.

The currentCount will be incremented each time we click the Increment button:

ang7

Now let us look at FetchDataComponent.

Here, in the fetchdata.component.ts file, we write the code that calls the API, which returns the data as shown below:

ang8

The API endpoint resides in the Controller folder as shown below:

ang9

The WeatherForecasts method returns JSON data; Angular then processes it and populates the web page with the appropriate HTML, as shown below:

ang10
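For reference, the endpoint generated by the template looks roughly like the sketch below (names such as SampleDataController and the DateFormatted/TemperatureC properties come from the default template; treat this as an approximation, since details differ between template versions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

namespace NeelTestApplication.Controllers
{
    [Route("api/[controller]")]
    public class SampleDataController : Controller
    {
        private static readonly string[] Summaries =
            { "Freezing", "Chilly", "Mild", "Warm", "Hot" };

        // GET api/SampleData/WeatherForecasts - serialized to JSON and consumed by FetchDataComponent
        [HttpGet("[action]")]
        public IEnumerable<WeatherForecast> WeatherForecasts()
        {
            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                DateFormatted = DateTime.Now.AddDays(index).ToString("d"),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            });
        }

        public class WeatherForecast
        {
            public string DateFormatted { get; set; }
            public int TemperatureC { get; set; }
            public string Summary { get; set; }
        }
    }
}
```

MVC serializes the returned IEnumerable to JSON automatically, so the Angular side only has to issue an HTTP GET and bind the result.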

Important Notes:

When we run the application, TypeScript compilation runs in the background, so if you make any changes in the code, they are reflected automatically in the browser.

This is possible because the Microsoft team has created Node services so that:

  • We can run, from within C# code, a JavaScript library running in Node
  • We can call out to Node.js and invoke JavaScript functions
  • We can use the returned value in our C# code

For example, we can kick off webpack from C# code, so whenever we click the Save button:

  • A watcher in the background picks up the save
  • It recompiles the JavaScript
  • Node.js, running in the background, recreates the view
  • The view is sent back to the browser

Hope it helps.

 

 

 

 

Visual Studio Code Tools for Artificial Intelligence(AI): First look

ai10

Microsoft recently announced Visual Studio Code Tools for AI, an extension to build, test, and deploy deep learning / AI solutions.

We all know that AI is in high demand nowadays. Let us first see what AI is:

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include: Speech recognition. Learning.

What can be done with the help of Visual Studio Code Tools for AI?

  • It integrates with Azure Machine Learning for robust experimentation capabilities
  • It can be used to submit data preparation and model training jobs transparently to different compute targets
  • It provides support for custom metrics
  • It can track run history
  • It enables data science reproducibility and auditing

It can be combined with deep learning frameworks like the Microsoft Cognitive Toolkit (CNTK), TensorFlow, Theano, Keras, Caffe2 and many other frameworks.

Let us go step by step:

Basic requirement:

Once you have Visual Studio Code installed, just install Visual Studio Code Tools for AI from the Extensions view as below:

ai1

Please note that it can be installed only if you have Visual Studio Code version 1.16.1 or higher; otherwise it will show the error below:

Couldn’t find a compatible version of Visual Studio Code Tools for AI with this version of Code.

Once the extension is downloaded you can see below screen:

ai3

Just click on Reload Window and it will then load the landing page of the extension.

To Explore Sample projects:

Note: To play with the sample projects, you first need to have the Azure Machine Learning Workbench installed. You can follow the steps below to install it:

Install Azure Machine Learning Workbench on Windows:

Install the Azure Machine Learning Workbench on your computer running Windows 10, Windows Server 2016, or newer.

  1. Download the latest Azure Machine Learning Workbench installer AmlWorkbenchSetup.msi.
  2. Double-click the downloaded installer AmlWorkbenchSetup.msi from your File Explorer.
  3. Finish the installation by following the on-screen instructions.

     

    ai4

    The installer downloads all the necessary dependent components such as Python, Miniconda, and other related libraries. The installation may take around half an hour to finish all the components.

  4. Azure Machine Learning Workbench is now installed in the following directory:

    ai5

    C:\Users\<user>\AppData\Local\AmlWorkbench

Once it is done, you can follow below steps to look at some sample projects:

  1. Open the command palette from View tab (View > Command Palette or Ctrl+Shift+P).
  2. Enter “ML Sample” in the search box.
  3. You get a recommendation for “AI: Open Azure ML Samples Explorer”; select it and press Enter:

ai6

Let us take the first example, Iris:

Click on Install and give it the name you want:

ai7

Then give a folder name and press Enter. It will create the project in your Visual Studio Code.

Background for Classifying Iris project:

The purpose of this example is to demonstrate how to use a feature selection technique not available for Azure ML experiments.

This is a companion sample project of the Iris tutorial that you can find from the main GitHub documentation site. Using the timeless Iris flower dataset, it walks you through the basics.

Let us just submit the job to train the model locally:

For that, open the iris_sklearn.py file, then right-click and select AI: Submit Job:

ai8

You can even view the running jobs by opening the Command Palette and searching for AI: List Jobs. It will show all running jobs.

Hope it helps.

 

 

 

 

C# 7.0 feature Part I : Pattern matching

pattern

In this series of posts, I will explain the new features of C# 7.0.

Let us see the pattern matching feature in the current post.

There are currently two existing language constructs which have been enhanced with patterns by the Microsoft team:

  1. the is keyword
  2. switch cases

Before starting let us see some advantages of Pattern matching:

  • To match patterns on any data type, even on custom data types
  • Built-in pattern matching
  • Pattern matching can extract values from your expression

Okay so let us see the switch case first.

Assume we have a class called Customer which is inherited by 2 classes, Agent and DirectConsumer, as shown below:

 class Customer
 {
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
 }

 class Agent : Customer
 {}

 class DirectConsumer : Customer
 {}

With C# 7.0 pattern matching feature, we can:

  • write additional conditions in case statements
  • switch on any type
  • use patterns in case statement

So we can write switch case statements as below:

 switch(customer)
 {
   case Agent a when (a.CustomerId == 11):
   Console.WriteLine($"Customer is an agent and Name: {a.Name}");
   break;

   case DirectConsumer b when ((b.CustomerId == 21) && (b.City == "Pune")):
   Console.WriteLine($"Customer is a consumer(Pune location) and Name: {b.Name}");
   break;

   default:
   Console.WriteLine("Customer Not found");
   break;

   case null:
   throw new ArgumentNullException(nameof(customer));
 }

There are a few points which are very important:

  • The sequence of the case statements now matters: the first one whose pattern and condition match will be processed, just like catch clauses
  • Even though default is written before the null case, the value is first checked against null; default is always evaluated last
  • If the customer object is null, processing falls into the null case, even if it is a null instance of Agent or DirectConsumer

Now let us see the is keyword:

You might be wondering: the is keyword has been in C# since the beginning, so why is it called a new feature?

Well, until now we could use the is keyword only to check whether an object implements a specified interface, or whether its type derives from a base class.

From C# 7.0 onwards, we can use is with:

  • Type pattern:

With type pattern, we can check whether the object is compatible or not:

  1. if (a is Agent p) Console.WriteLine($"it's an agent: {p.Name}");
  2. if (d is DirectConsumer b && (b.City == "Pune")) Console.WriteLine($"it's a direct consumer living in {b.City}");
  • Const pattern:

It can be used to check against any constant value, including null:

  1. if (a is null) throw new ArgumentNullException(nameof(a));
  2. if (a is 10) Console.WriteLine("it is 10");

 

Let us take an example to check whether an object is a string:

object obj = "Hello, World!";
if (obj is string str)
{
    Console.WriteLine(str);
}

One more example, where we want to check whether an object is equal to some constant:

object obj = 1;

if (obj is 1)
{
    Console.WriteLine(true);
}

Hope it helps 🙂

 

 

 

C# 8.0 Expected Features Part – II : Implementation of method in the Interface

C8

In my previous post, I explained one of the first four features of C# 8.0 which Microsoft announced recently.

In this post, I will explain one more upcoming feature: default interface methods.

This feature will allow interfaces to fully define methods, just like abstract classes. However, interfaces will still not be able to declare constructors or fields.

Let us see what it is:

Let us take one example of an interface, like IMessage and it has one method named Message as shown below:

interface IMessage {
   void Message();
}

And a class will implement it as below:

class Test : IMessage
{
    public void Message()
    {
        // Your implementation
    }
}

Now imagine we want to add a new method to the interface; it will result in lots of errors in all the classes which implement the IMessage interface.

Here comes the new feature which changes some basic functionalities of C# language.

We will create a method with the implementation in the interface. Yes, you read it right, we will be able to write the implementation of the method into the interface as below:

interface IMessage {
    void Message();

    void MessageAll(IEnumerable<MyClass> myClass) {
        foreach(var i in myClass)
        {
            //// Your code
        }
    }
}

So there is no need to add the newly added method to all the classes which implement the IMessage interface.

It is cool, isn’t it?
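To make the idea concrete, here is a sketch using the C# 8 preview syntax (the IMessage/Test names come from the example above; the string list and console messages are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;

interface IMessage
{
    void Message();

    // Default implementation: implementing classes get this for free
    void MessageAll(IEnumerable<string> names)
    {
        foreach (var name in names)
        {
            Console.WriteLine($"Hello, {name}");
        }
    }
}

class Test : IMessage
{
    // Only the member without a body must be implemented
    public void Message() => Console.WriteLine("Hello from Test");
}

class Program
{
    static void Main()
    {
        // Default members are reachable through the interface type
        IMessage m = new Test();
        m.Message();
        m.MessageAll(new[] { "Neel", "World" });
    }
}
```

Note that a default member is only reachable through the interface type: calling new Test().MessageAll(...) directly would not compile, which is one of the subtle differences from an abstract base class.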

One basic question that comes to mind is: what is the difference between this and an abstract class?

They are a bit similar, but there are some major differences:

  • An interface can be used for multiple inheritance, but an abstract class cannot
  • Interfaces will still not be able to declare constructors or fields

Some cautionary points for the default methods feature:

  • With “full abstraction” we used to have plain interfaces, and every class was forced to have its own implementation. Now we are polluting classes with default implementations of methods they don’t even know exist.
  • It can also break the interface segregation principle: if we have to change an interface in the future and it should not break all the classes, but some of the classes need the new method, then the interface is not granular enough and it’s time to introduce a new interface and implement it for the required classes.

Java introduced an almost identical feature in version 8 earlier, and it has become quite popular among Java developers; it will likely soon become popular among .NET developers as well 🙂

 

Internet Of Things(IoT) with Node-Red and Azure: Part 1

node14

In my upcoming series of posts, I will explain how to create an IoT hub in Azure, how to create an IoT device using Azure's sample code, and how to integrate Azure IoT with Node-RED.

First of all, let us see what IoT is.

As per Wikipedia:

The Internet of things (IoT) is the network of physical devices, vehicles, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data.

What is Node-RED?
Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.

It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette that can be deployed to its runtime in a single click.

It is widely used for the Internet of Things (IoT), as it provides flow-based programming.

node12

It offers browser-based flow editing and is built on Node.js.

In this post, I will explain how to create an IoT hub in Azure.

Let us go step by step:

Make sure you have an account in Azure. You can create a free account, which is valid for a month.

First of all, we will create an IoT hub in Azure:

node4

Give it a meaningful name and click on Create:

NODE5

It will take some time, and once the process is complete, it will show the message below:

node6

Once it is done, you can find the newly created IoT hub under resource groups as below:

node8

Click on Shared access policies and then click on iothubowner to get the connection string of your IoT hub:

node10

Save the connection string as shown below:

node11

You have just created your first IoT hub.

In my next post, I will show how to create your IoT device (we will use Azure's sample code for that) and then integrate it with Node-RED.