First look at Azure Machine Learning: Azure Machine Learning part II

ml5

In my last post, I explained the basics of Machine Learning and also described the development life cycle of a Machine Learning project.

In this post, I will explain some frequent issues during Machine Learning development and how you can overcome them using Azure Machine Learning, along with some basic data cleansing tasks using Azure Machine Learning.

One of the biggest problems with Machine learning development:

In the Machine Learning workflow, there is sometimes friction in the handover between the Data Scientist and Operations.

The models which are developed are often recoded by hand, which causes translation errors:

ml8

As a result, the Data Scientist loses visibility into the model's performance.

How can Azure Machine Learning solve this big problem?

With Azure Machine Learning, the workflow is dramatically enhanced because it enables the operations engineer to encapsulate the model instead of recoding it, which reduces the noise in the system:

ml9

Additionally, Azure Machine Learning makes experiments more efficient by reducing the time needed to prepare the data, as well as by simplifying experiment evaluation.

What are the components of Azure Machine learning?

It is an Azure service consisting of libraries like the Microsoft ML Spark libraries and tools like Azure ML Workbench. These work together with IDEs like Visual Studio Code, PyCharm and Jupyter, and with third-party libraries like TensorFlow, TLC and CNTK.

You can train as well as deploy using Docker on Azure compute such as HDInsight, VMs, GPUs and Azure Container Services, as well as on IoT devices.

Below is the complete picture of the things I explained above:

ml10

Apart from this, Azure Machine Learning can help Data scientists as below:

  • You can reuse some existing Python, R scripts
  • Easy configuration for modeling and deployment
  • Easy to use graphical interface
  • No need to set up anything; it is ready to start, with no more computing resource limitations
  • Azure marketplace to utilize existing models or publish/monetize your new models
  • Built-in Algorithms:

Azure-Machine-Learning-Models

Can it help developers as well?

Yes, it does:

  • Very helpful existing ML APIs which you can use
  • You can easily use ML models in day-to-day applications
  • It brings prediction capabilities to the masses, making them available to non-experts
  • Predictive models can be used to interpret the huge amounts of data that result from the Internet of Things (IoT)

How to get started with Azure Machine Learning?

To get started with Studio, go to https://studio.azureml.net. If you’ve signed into Machine Learning Studio before, click Sign In. Otherwise, click Sign up here and choose between free and paid options.

Sign in to Machine Learning Studio
Now let us take a quick example of data cleansing on some large data in Azure Machine Learning.

For example, our task is to find whether an image contains a snow leopard or not. There are a bunch of images, and from these we are required to find which ones have a snow leopard in them.

One of the examples of those images is as below:

ml11

To load the data:

Create a new experiment by clicking +NEW at the bottom of the Machine Learning Studio window, select EXPERIMENT, and then select Blank Experiment:

ml21

Now, we have metadata for this bunch of images, containing a long image path along with a couple of timestamps, which we will load into Azure:

ml12

As you can see in the above image, all the images have a unique image number. So our first task is to extract those unique numbers from the long paths.

With Azure Machine Learning, it can be done by just a few clicks.

We will use the Derive column by example feature for this.

ml13

Now just give the image number for a couple of images and the system will learn and perform the task for the rest of the paths on its own. It is a type of Supervised Learning:

ml14

Once the process is done:

ml15

As you can see, it learned to extract the image number from the rest of the images, even though some of them have parentheses.
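Outside of Studio, the extraction rule that Derive column by example learns here could be sketched in plain Python with a regular expression. This is only an illustration; the paths and the IMG_ naming pattern below are invented, not the actual dataset:

```python
import re

# Invented image paths, similar in shape to the metadata shown above
paths = [
    "images/otherImages/IMG_4501.JPG",
    "images/snowLeopardImages/IMG_4502 (2).JPG",
    "images/otherImages/IMG_4503.JPG",
]

def image_number(path):
    # Pull out the digits that follow "IMG_", ignoring any
    # parenthesised copy counter such as "(2)"
    match = re.search(r"IMG_(\d+)", path)
    return match.group(1) if match else None

numbers = [image_number(p) for p in paths]
print(numbers)  # ['4501', '4502', '4503']
```

The point of the Studio feature is that you never write this rule yourself; it is inferred from the couple of examples you provide.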

Now let us see another example of cleansing the data task.

In the above metadata, the image paths are categorized into mainly 2 folders, one with otherImages and another with snowLeopardImages:

ml16

So we will assign 0 to otherImages and 1 to snowLeopardImages, using Derive column by example again:

ml17

ml18

It may require giving 0 and 1 more than once (around 5 times at most), because with more examples, the machine learns that it should put 0 against all otherImages and 1 against all snowLeopardImages.
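What the machine ends up learning here is essentially a folder-name-to-label mapping. As a rough sketch only (the paths below are invented), the same rule could be written as:

```python
from collections import Counter

# Invented image paths; the real metadata comes from the dataset
paths = [
    "images/snowLeopardImages/IMG_4502.JPG",
    "images/otherImages/IMG_4501.JPG",
    "images/otherImages/IMG_4503.JPG",
]

def label(path):
    # 1 for snow leopard images, 0 for everything else
    return 1 if "snowLeopardImages" in path else 0

labels = [label(p) for p in paths]
print(Counter(labels))  # the equivalent of Studio's "Value count" view
```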

Once the process is over, we can see the count of 0 and 1 by clicking on Value count as shown below:

ml19

The window below shows that we have 2864 images without a snow leopard and around 800 images with one:

ml20

With just a few clicks, we can do data cleansing tasks.

Microsoft Research provides many pre-existing libraries, but we can also use other open-source and third-party libraries.

In the next post, we will see how we can integrate Python code into Azure Machine Learning to improve the accuracy of the same snow leopard model, and how to deploy it.

Hope it helps.

 

 

 


My idea got selected in SBI Hackathon

sbi

This post is not related to any technical topic. Just wanted to share good news here.

I applied for the SBI (State Bank of India) hackathon (#D4B2017) a few days back, and the good news is that my idea has been shortlisted 🙂

My idea is based on voice authentication, and I am going to submit the project by 15th November.

Let us hope they will like the demo 🙂

Machine Learning in simple words: Azure Machine Learning part I

ml6

Nowadays Machine Learning is a very hot topic; everyone is talking about it and discussing how it can be useful in their business or career.

Machine Learning in simple words

Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that machines should be able to learn and adapt through experience.

Types of Machine Learning

Supervised Learning
  • When there is a pre-defined dataset to train your program
  • Based on its training data, the program can make accurate decisions when given new data
  • So it is like learning with a teacher
  • It covers tasks like classification and regression
  • For example, you receive a bunch of flowers with labels, and your program can identify the flowers on the basis of the labeling

ml32
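To make this concrete, here is a tiny sketch of supervised learning in plain Python. The flower measurements and names are invented; the labelled training set plays the role of the teacher, and a new flower is classified by its nearest labelled neighbour (1-NN):

```python
# Labelled training data: (petal measurements, flower name)
train = [
    ((5.1, 3.5), "rose"),
    ((4.9, 3.0), "rose"),
    ((6.7, 3.1), "tulip"),
    ((6.3, 2.9), "tulip"),
]

def classify(flower):
    # Return the label of the closest labelled example
    def distance(sample):
        (x, y), _ = sample
        return (x - flower[0]) ** 2 + (y - flower[1]) ** 2
    return min(train, key=distance)[1]

print(classify((5.0, 3.4)))  # rose
print(classify((6.5, 3.0)))  # tulip
```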

Unsupervised Learning
  • When there is no teacher to train the program
  • Your program is smart enough to automatically find patterns and relationships in data that has no labels
  • In this learning, you don't use any past/prior knowledge and classify items on the go
  • It covers tasks like clustering and association
  • For example, you receive flowers without labels, so the program needs an algorithm to identify the groups of flowers on its own

ml33
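The unsupervised case can be sketched the same way (again with invented numbers): a few iterations of a minimal k-means group unlabelled flower sizes into two clusters without ever seeing a label:

```python
# Unlabelled measurements; the program must find the groups itself
sizes = [4.9, 5.1, 5.0, 6.3, 6.5, 6.7]
centres = [sizes[0], sizes[-1]]  # naive initialisation

for _ in range(10):
    # Assign each point to its nearest centre, then move the centres
    clusters = [[], []]
    for s in sizes:
        nearest = min((0, 1), key=lambda i: abs(s - centres[i]))
        clusters[nearest].append(s)
    centres = [sum(c) / len(c) for c in clusters]

print(clusters)  # [[4.9, 5.1, 5.0], [6.3, 6.5, 6.7]]
```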

Reinforcement learning
  • It is trial-and-error learning
  • The program learns from its own experience
  • A software program performs a defined task optimally and learns by trial and error through experience

ml34
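A toy sketch of trial-and-error learning (the two-armed "bandit" below is invented): the program starts with no data at all and learns which lever pays better purely from the rewards of its own actions, exploring occasionally and otherwise exploiting its best estimate:

```python
import random

random.seed(0)
payout = {"left": 0.2, "right": 0.8}   # true odds, unknown to the learner
value = {"left": 0.0, "right": 0.0}    # the learner's running estimates
counts = {"left": 0, "right": 0}

for _ in range(1000):
    # Explore 10% of the time, otherwise exploit the best estimate
    if random.random() < 0.1:
        arm = random.choice(["left", "right"])
    else:
        arm = max(value, key=value.get)
    reward = 1 if random.random() < payout[arm] else 0
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]  # running average

print(max(value, key=value.get))  # the learner settles on "right"
```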

Where it is being used

Many big industries have already started implementing Machine learning for their business.

For example, I recently participated in a well-known bank hackathon where the themes of the hackathon were mainly on Machine learning and AI.

One example is mobile check deposits: take a picture of your filled-in cheque and upload it to your account. There is no need to physically visit the bank and wait for the cheque to be deposited; it saves time and is easier to use. It can also be used for fraud detection.

This is just one example, but there are many other examples:

  • Self-driving cars
  • Fraud preventions techniques
  • Air traffic controls
  • Uber uses Machine Learning to make its service more powerful
  • Social networks like Facebook use machine learning; for example, when you upload an image it automatically suggests whom you should tag in the picture
  • Pinterest can recommend similar pins based on the image you uploaded
  • Snapchat introduced facial filters, called Lenses. These filters track facial movements, allowing users to add animated effects or digital masks that adjust as their faces move
  • In online shopping, suggestions come from the user's previous interests
  • Smart personal assistants like Alexa, Cortana, Siri and many more

At this moment, many sensors and other devices are collecting data that will be used for Machine Learning projects.

What’s required to create good machine learning systems?

  • Data preparation capabilities
  • Algorithms – basic and advanced
  • Automation and iterative processes
  • Scalability
  • Ensemble modeling
  • Easy and frequent deployments

Machine learning project Lifecycle

It basically involves 3 teams working together:

ml1

First, the Data Scientist acquires and transforms the data, building a deep understanding that allows them to create a model:

ml2

Once the model is chosen, the Operations Engineer deploys it and sets up monitoring and management in the production environment:

ml3

Finally, the Developers embed programmatic access to this deployed model in code, converting it into an API:

ml4

These APIs can then be accessed from the outside world.

For example, Microsoft Cognitive services have an open Vision API. Have a look here if you require more information on this.

In my next post, I will explain some frequent issues during Machine Learning development and how you can overcome them using Azure Machine Learning. (* Update – The post is here)

Hope it helps.

ErrorCode = ‘0x80004005 : 80008083: .Net Core 2.0 + IIS exception

We all know that .Net Core has been released, so people have started implementing applications with .Net Core.

One exception which people frequently get while deploying a .Net Core application (created with Visual Studio 2017) on IIS is as below:

Application ‘<IIS path>’ with physical root ‘<Application path>’ failed to start process with commandline ‘”dotnet” .\MyApp.dll’, ErrorCode = ‘0x80004005 : 80008083.

Reason for the exception

This exception occurs when the required runtime is not deployed on the server and the web application was recently moved to Visual Studio 2017. VS2017 RC ships with a new version of the .NET Core SDK, while your server has a different .Net version installed.

Meaning of error code:

  • 0x80008083 – code for version conflict.
  • 0x80004005 – file is missing or cannot be accessed

So the error means a different version of dotnet needs to be installed on the server.

Solution:

.Net Core 1.0 needs to be installed on the server.

Steps are as below:

  • Stop IIS
  • Install .Net core 1.0 on the server which you can find here.
  • Start IIS again

The above error will not come back after this.

Note:

I have explained this for .Net Core 1.0, but it depends on the version. For example, if you are deploying .Net Core 2.0 then you need to install the SDK accordingly.

Hope it helps.

 

 

Newly written SignalR with .Net Core 2.0 and TypeScript

signalr

Whenever we want to implement real-time web functionality in .Net, the first thing that comes to mind is SignalR.

What is SignalR?

ASP.NET SignalR is a library for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available. SignalR supports Web Sockets, and falls back to other compatible techniques for older browsers. SignalR includes APIs for connection management (for instance, connect and disconnect events), grouping connections, and authorization.

So basically SignalR can be used for pub-sub, Client-Server applications.

For example, we can implement chat using SignalR. The client first subscribes to the Hub, and once it is subscribed, the Hub pushes updates as soon as they are available.

Microsoft announced .Net Core 2.0 last August, which did not have support for SignalR, but recently SignalR has been rewritten to support .Net Core. The new SignalR is simpler, more reliable, and easier to use.

Important Note:

Please note that the new SignalR is not compatible with previous versions of SignalR. This means we cannot use an old server with new clients, or old clients with a new server.

TypeScript Client

Until now the SignalR client mostly depended on jQuery, but the newer version of SignalR has a JavaScript client written in TypeScript. It can be used from Node.js as well.

Let us create a sample client which we will use to invoke methods from the server (a sample Hub server can be found at the end of this post).

For Installation:

npm install @aspnet/signalr-client

Setup:

First of all, you need to add the below dependencies to the package.json file to use the client in a Node.js application:

"dependencies": {
  "atob": "^2.0.3",
  "btoa": "^1.1.2",
  "eventsource": "^1.0.5",
  "msgpack5": "^3.5.1",
  "websocket": "^1.0.24",
  "xmlhttprequest": "^1.8.0"
},

For example, say you want to make an application which reads stock rates as soon as the market opens. First, we will create a new hub connection in a stocks.ts TypeScript file:

let hubConnection = new signalR.HubConnection("http://localhost:8080/stocks");

Then we will write a streamStocks method which will collect the stock data by calling the StreamStocks method of the hub:
function streamStocks(): void {
    hubConnection.stream("StreamStocks").subscribe({
        next: (stock: any) => {
            console.log(stock);
        },
        error: () => {},
        complete: () => {}
    });
}

Then we will invoke the streamStocks method if the market is open, and invoke the OpenMarket method if the market is not open yet:

hubConnection.start().then(() => {
    hubConnection.invoke("GetMarketState").then(function (state: string): void {
        if (state === "Open") {
            streamStocks();
        } else {
            hubConnection.invoke("OpenMarket");
        }
    });
}).catch(err => {
    console.log(err);
});

We can subscribe to the marketOpened event, which will then call streamStocks once the market is opened, as below:

hubConnection.on("marketOpened", function (): void {
    streamStocks();
});

Streaming

It is now possible to stream data from the server to the client. Unlike a regular Hub method invocation, streaming means the server is able to send results to the client before the invocation completes.

Not more than one Hub per connection

The new SignalR does not support multiple hubs per connection. Also, it is no longer required to subscribe to Hub methods before the connection starts.

SignalR Server example

As we have created the client, now let us create the hub.

First of all, create an empty web application from Visual Studio 2017 templates:

core3

Then we need to add the SignalR reference, which you can add from the Package Manager Console as shown below:

Install-Package Microsoft.AspNetCore.SignalR -Pre

which will add the SignalR reference into the project as below:

<ItemGroup>
   <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
   <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha2-final" />
</ItemGroup>

Now create a hub class for Stocks which can communicate with all subscribed clients:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

namespace SignalRApplication
{
    public class StockHub : Hub
    {
        private readonly Stock _stock;

        public StockHub(Stock stock)
        {
            _stock = stock;
        }

        // To take advantage of streaming, a hub method must return
        // either a ReadableChannel<T> or an IObservable<T>
        public IObservable<Stock> StreamStocks()
        {
            return _stock.StreamStocks();
        }

        public string GetMarketState()
        {
            return _stock.MarketState.ToString();
        }

        public async Task OpenMarket()
        {
            await _stock.OpenMarket();
        }

        public async Task CloseMarket()
        {
            await _stock.CloseMarket();
        }
    }
}

The Hub can communicate with all the clients which have subscribed to it.

Note: Here the Stock service contains all the important implementation of the Open/Close market methods, but we will not go into those details in the current post.

After adding the Hub class, you need to configure the server to pass requests sent to the stocks hub to SignalR:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
        services.AddScoped<StockHub>();
        services.AddSingleton<Stock>();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseFileServer();
        app.UseSignalR(routes => { routes.MapHub<StockHub>("stocks"); });
    }
}

Once the setup of the server is done, you can invoke hub methods from the client (which we created at the start of this post) and receive invocations from the server.

So once we run the client:

  • A connection with the hub will be started and GetMarketState will be called, which will return the current state of the market
  • If the market state is Open, StreamStocks will be called, which will return the stock data as a stream. To take advantage of streaming you need to create a hub method that returns either a ReadableChannel<T> or an IObservable<T>, as shown in the above code
  • If the market is not open, the market will first be opened and then StreamStocks will be called

Whenever we run the application, it will get real-time stock data as below:

signalr2

Hope it helps.

SourceTree with Bitbucket cloud

repo15

In this article, I will explain how to do all Git operations from SourceTree and have them reflected in BitBucket.

Let us first see what BitBucket is:

Bitbucket is a web-based hosting service that is owned by Atlassian. It is the Git solution for professional teams. Bitbucket core features include pull requests, branch permissions, and inline comments.

What is SourceTree?

Sourcetree simplifies how you interact with your Git repositories so you can focus on coding. Visualize and manage your repositories through Sourcetree’s simple Git GUI.

Below are the steps for different Git operations which are done in SourceTree.

  • Create repository in BitBucket by clicking on + icon
  • Give name of the repository along with the project name and then click on Create repository

repo1

Clone Repository

  • Once the repository is created, you can get the URL of the repo which can be used to communicate from commands or SourceTree

repo2

  • We can clone this repo by providing the URL into the SourceTree. It will then get the remote repository into our local folders:

repo3

Git bash command:

git clone https://NeelAtlas@bitbucket.org/everestatlas/repositorydemo.git C:\Users\E074368\Documents\repositorydemo

Push Changes

  • Place the project you want to push into the folder where the repository is cloned. As soon as you put the project into the folder, SourceTree will pick the changes into its UI:

repo4

  • Click on Stage All, then click on Commit and check Push changes immediately. It will commit and push the changes to the remote repository:

repo5

GitBash commands:

git commit -m "commit message"

git push origin master

  • It should now be available on BitBucket under Source tab

Pull commands

  • Create a sample file in BitBucket to check the Pull command. Once the file is added in BitBucket, click on Pull, which will pull all the latest changes from BitBucket into your local folder

repo6

Git Commands:

git fetch origin

git pull origin master

Branch Creation

  • Click on Branch button in SourceTree and give appropriate name:

repo7

Git Commands:

git branch DemoRepositoryBranch1

git checkout DemoRepositoryBranch1

  • The branch has been created and checked out. Go to the physical folder where the actual code is stored and make changes in any file
  • As soon as the changes are made in the file, they will be reflected in SourceTree as below:

repo8

  • Stage All and then Commit + Push as we did earlier
  • To merge the changes from the branch into master, we first need to check out the master branch. For that, just double click on the master branch and then click on Merge

Git commands:

git checkout master

  • Select the changeset you want to merge and also check the checkboxes as shown below:

repo9

Git command:

git merge DemoRepositoryBranch1

  • Once the branch is merged, it will show that a couple of changes need to be pushed:

repo10

  • Push the changes as we did earlier. It will push the master changes into the remote repository
  • In BitBucket, you can now see that the changes made in the branch have been merged into the main master branch

Delete branch

  • Click on the Branches tab and then click on Delete Branches; delete for both local and remote:

repo11

Git Commands:

git branch -d DemoRepositoryBranch1

git branch -d -r origin/DemoRepositoryBranch1

git push origin :refs/heads/DemoRepositoryBranch1

Rebase branch

  • Rebase is somewhat similar to merge, but instead of creating a merge commit, it replays your branch's changes on top of the latest master, giving a linear history. Have a look here for more information.
  • Create a new branch and repeat the steps we did above, but do not merge this time. We will rebase instead of merge
  • First, check out the master branch by double-clicking on master, then right click on the branch and select the option as shown below:

repo12

  • Once the rebase is done, there will be an item added to push. Click on Push to push the changes into the remote repository

Git Commands:

git rebase DemoRepositoryBranch2

Rollback

  • In SourceTree, right click on the commit you want to roll back and then click on revert commit:

repo14

It will roll back the changes from that commit, but make sure that you do not have any uncommitted changes while you do the rollback.

Git Commands:

git revert commitId

Hope it helps.

AI-Driven Stack Overflow Bot from Microsoft: First look

 

bot7

I guess there would be very few developers who do not use StackOverflow in their day-to-day work. StackOverflow is part of a developer's life.

Whenever we have an issue or doubt, we go to the browser, open StackOverflow or search for the question, the StackOverflow link opens, and then we can clear our doubts or issues.

Now imagine you have an active bot in your Visual Studio Code and as soon as you have any issues, just ask that to the bot without leaving Visual Studio Code.

Sounds interesting right?

It is possible with the AI-driven StackOverflow bot, as Microsoft has teamed up with Stack Overflow to create a bot that will answer your programming questions from inside the Visual Studio Code editor.

Let us see what is required to run the bot.

Requirement:

  • Node 8.1.4 or higher
  • StackBot directory

Steps to run:

  • Run npm install in StackBot directory
  • Run npm start in StackBot directory
  • Navigate to http://localhost:portNumber/ to interact with bot

Please note that as this bot uses a number of services (including Bing Custom Search, LUIS, QnA Maker, and Text Analytics), you will need to create applications and generate keys for each one. Microsoft has created a GitHub page with the necessary details to guide you on how to do this.

For this article, we will concentrate on Visual Studio Code's capability to run the StackOverflow bot.

Configuration of the bot in Visual Studio Code

As I explained earlier, Visual Studio Code allows developers to quickly call the bot using some simple commands.

Steps:

bot8

  • From the Bot dashboard, add ‘Direct Line’ channel which will communicate with your bot’s backend service

bot9

  • Add a site with an appropriate name; this will redirect to the page where you can generate the tokens

bot10

Once you add a site:

bot11

  • Click on Show to view the keys:

bot12

  • Copy the tokens and go back to Visual Studio Code
  • Open user settings and add a new field named StackCode.directLineToken; assign the token you copied earlier to this field
  • If everything is done correctly, a pane will open; this interactive window is nothing but the bot

Microsoft demoed the bot at the Ignite conference last month, where they showed how powerful it is.

Let us see some examples of the bot:

Whenever you want to get help from StackOverflow, just start the StackOverflow bot:

bot1

Which will open the StackOverflow bot:

bot2

Now you can just write your question in the text box and the bot will give you answers:

bot3

It can even help you if you need the code.

For example, suppose you need to convert Name + Surname into Surname followed by the first initial of the name.

Just ask this to the bot:

bot4

And the bot will reply with the code for you:

bot5

It can even read the image you uploaded into the bot.

For example, when you have an exception, take a screenshot of the exception and just upload that image to the bot:

bot6

It is really mind-blowing.

 

 

Microsoft Cognitive Services for AI : Vision API

microsoft-cognitive

Recently I took part in a Hackathon in which we were required to submit some innovative ideas for a well-known bank.

I registered, and after a few days I got an email from the Hackathon event team saying that they had arranged some webinars to help people come up with innovative ideas.

I was impressed by the agenda of the webinar, which included the below points:

  • Microsoft Vision API
  • Microsoft Speech API
  • Microsoft Language API
  • Microsoft Knowledge API
  • Microsoft Search API

This was the first time I had heard about Microsoft Cognitive Services, and when I learned more about them, I got to know how powerful they are.

Let us first see what Microsoft Cognitive Services is:

Microsoft Cognitive Services (formerly Project Oxford) are a set of APIs, SDKs and services available to developers to make their applications more intelligent, engaging and discoverable. Microsoft Cognitive Services expands on Microsoft’s evolving portfolio of machine learning APIs and enables developers to easily add intelligent features – such as emotion and video detection; facial, speech and vision recognition; and speech and language understanding – into their applications. Our vision is for more personal computing experiences and enhanced productivity aided by systems that increasingly can see, hear, speak, understand and even begin to reason.

It has basically 5 main features:

  • Vision
  • Knowledge
  • Language
  • Search
  • Speech

ai1

Let us see how the Vision API works

Follow the below steps, which are required:

Also, if you want to have the Bot Application as a template, then as a workaround just download this project and put the extracted folder into the below location:

C:\Users\YourName\Documents\Visual Studio 2015\Templates\ProjectTemplates\Visual C#

Once this is done, you can see the Bot Application template as shown below:

ai2

Click on Bot Application and it will create a sample project with the structure below:

ai3

Here MessagesController is created by default and it is the main entry point of the application.

MessagesController will call the service which handles the interaction with the Microsoft APIs. Replace the code in MessagesController with the code below:

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Description;
using Microsoft.Bot.Connector;
using Newtonsoft.Json;
using NeelTestApplication.Vision;

namespace NeelTestApplication
{
    [BotAuthentication]
    public class MessagesController : ApiController
    {
        public IImageRecognition imageRecognition;

        public MessagesController()
        {
            imageRecognition = new ImageRecognition();
        }

        /// <summary>
        /// POST: api/Messages
        /// Receive a message from a user and reply to it
        /// </summary>
        public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
        {

            ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));

            if (activity.Type == ActivityTypes.Message)
            {

                var analysisResult = await imageRecognition.AnalizeImage(activity);
                Activity reply = activity.CreateReply("Did you upload an image? I'm more of a visual person. " +
                                      "Try sending me an image or an image url"); //default reply

                if (analysisResult != null)
                {
                    string imageCaption = analysisResult.Description.Captions[0].Text;
                    reply = activity.CreateReply("I think it's " + imageCaption);
                }
                await connector.Conversations.ReplyToActivityAsync(reply);
                return new HttpResponseMessage(HttpStatusCode.Accepted);
            }
            else
            {
                HandleSystemMessage(activity);
            }
            var response = Request.CreateResponse(HttpStatusCode.OK);
            return response;
        }

        private Activity HandleSystemMessage(Activity message)
        {

            if (message.Type == ActivityTypes.DeleteUserData)
            {
                // Implement user deletion here
                // If we handle user deletion, return a real message
            }
            else if (message.Type == ActivityTypes.ConversationUpdate)
            {
                // Handle conversation state changes, like members being added and removed
                // Use Activity.MembersAdded and Activity.MembersRemoved and Activity.Action for info
                // Not available in all channels
            }
            else if (message.Type == ActivityTypes.ContactRelationUpdate)
            {
                // Handle add/remove from contact lists
                // Activity.From + Activity.Action represent what happened
            }
            else if (message.Type == ActivityTypes.Typing)
            {
                // Handle knowing that the user is typing
            }
            else if (message.Type == ActivityTypes.Ping)
            {
            }

            return null;
        }
    }
}

In the above code, you can find an interface called IImageRecognition. This interface includes the methods which will interact with the Microsoft APIs.

So now we will add an interface named IImageRecognition with the below code:

using Microsoft.Bot.Connector;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using System.Threading.Tasks;

namespace NeelTestApplication.Vision
{
    public interface IImageRecognition
    {
        Task<AnalysisResult> AnalizeImage(Activity activity);    
    }
}

Once this is done, let us add the ImageRecognition class, which implements IImageRecognition:

using Microsoft.Bot.Connector;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using System.Web;

namespace NeelTestApplication.Vision
{
    public class ImageRecognition : IImageRecognition
    {
        private   VisualFeature[] visualFeatures = new VisualFeature[] {
                                        VisualFeature.Adult, //recognize adult content
                                        VisualFeature.Categories, //recognize image features
                                        VisualFeature.Description //generate image caption
                                        };

        private VisionServiceClient visionClient = new VisionServiceClient("<your-api-key>"); // replace with your Vision API key

        public async Task<AnalysisResult> AnalizeImage(Activity activity)  {
            //If the user uploaded an image, read it, and send it to the Vision API
            if (activity.Attachments.Any() && activity.Attachments.First().ContentType.Contains("image"))
            {
                //stores image url (parsed from attachment or message)
                //stores image url (parsed from attachment or message)
                string uploadedImageUrl = activity.Attachments.First().ContentUrl;
                uploadedImageUrl = HttpUtility.UrlDecode(uploadedImageUrl.Substring(uploadedImageUrl.IndexOf("file=") + 5));

                using (Stream imageFileStream = File.OpenRead(uploadedImageUrl))
                {
                    try
                    {
                        return await this.visionClient.AnalyzeImageAsync(imageFileStream, visualFeatures);
                    }
                    catch (Exception)
                    {
                        return null; //on error, return null instead of an analysis result
                    }
                }
            }
            //Else, if the user did not upload an image, determine if the message contains a url, and send it to the Vision API
            else
            {
                try
                {
                   return await visionClient.AnalyzeImageAsync(activity.Text, visualFeatures);
                }
                catch (Exception)
                {
                    return null; //on error, return null instead of an analysis result
                }
            }
        }
    }
}

Note that you will be required to add an API key, which you can get from the Cognitive Services page of the Azure portal.

The ImageRecognition class has an important method named AnalizeImage, which reads the image from its location and converts it into a stream. It then calls the below API method, passing in the image stream:

this.visionClient.AnalyzeImageAsync(imageFileStream, visualFeatures);

The above method returns an AnalysisResult, from which the image caption can be extracted as below:

var imageCaption = analysisResult.Description.Captions[0].Text;

So basically, the image caption is the text returned after analyzing the image.

Let us try this out.

If we want to test our bot locally, then the Bot Framework Emulator is the best option.

The Bot Framework Emulator is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.

As mentioned at the top of the post, you can download the Bot Framework Emulator from here.

The only important thing it requires is the URL of your API. For example, in our case it would be:

http://localhost:PortNumber/api/messages

Now, when we upload an image in the Bot Emulator, it will give the result as below:

ai4

It is awesome. Hope it helps.

Angular with .Net Core 2.0

ang12

When we think of creating a JavaScript application, for example an Angular project, we generally do not think of Visual Studio, because we are not used to writing Angular code in Visual Studio.

But the Microsoft team has made it possible to write Angular code in Visual Studio, and it works very well with .NET back-end code.

Let us see how to create an Angular application using the new templates which were introduced with .NET Core 2.0. For more information, have a look here.

First of all, click on File -> New -> Project. It will open the window below:

ang1

Then click on .NET Core Web Application; it will open the window below:

core3

Click on the Angular template, which will create a brand new Angular project in which:

  • Views of MVC are replaced by Angular
  • We still have Models and Controllers
  • There is no Razor, so if you are a big fan of Razor, then Angular is not the approach you should take
  • There is now a ClientApp folder where the JavaScript framework components are held

The structure of the project looks like below:

ang2

As you can see, we have Controllers here, but currently no Models; they can be added when we connect a database to the application.

Let us look at what the Views look like. For that, we will open Index.cshtml, which looks like below:

ang3

This is just the bootstrapper for Angular; we are not going to write any views or Angular code in this folder for the current project.

Now let us look at the actual client side of the application where we can find all the TypeScript.

What is TypeScript?

TypeScript is a free and open-source programming language developed and maintained by Microsoft. It is a strict syntactical superset of JavaScript and adds optional static typing to the language. Anders Hejlsberg, the lead architect of C# and creator of Delphi and Turbo Pascal, has worked on the development of TypeScript.
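To illustrate the optional static typing, here is a minimal sketch (the names `Counter` and `increment` are hypothetical, just for demonstration):

```typescript
// A typed interface: the compiler checks the object's shape at compile time
interface Counter {
  count: number;
}

// Optional static typing: parameter and return types are annotated,
// and the second parameter has a default value
function increment(counter: Counter, by: number = 1): Counter {
  return { count: counter.count + by };
}

const result = increment({ count: 41 });
console.log(result.count); // 42
```

Passing an object without a numeric `count` property would be a compile-time error, which is exactly the safety net plain JavaScript lacks.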

For our example, we have created a CounterComponent, which increments a counter when we click a button, and a FetchDataComponent, which fetches data from the API and shows it in the view, as shown below:

ang4

Let us look at the CounterComponent first.

In Counter.component.ts file we will write below code:

ang5

Here you can see Angular code, but it is written in TypeScript. For example, the currentCount variable in the above component is bound in the corresponding HTML file as below:

ang6

Once the code is written, just click the IIS Express button at the top of the page and it will run the Angular application.

So currentCount will be incremented each time we click the Increment button:

ang7

Now let us look at FetchDataComponent.

Here, in the FetchData.component.ts file, we can write the code to call the API, which returns the data as shown below:

ang8
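The data handling on the client side can be sketched as below. The `WeatherForecast` interface mirrors the shape of the data the sample controller returns, and the Celsius-to-Fahrenheit conversion follows the formula the default template uses server-side; treat the exact names as illustrative rather than a verbatim copy of the template:

```typescript
// Shape of one item returned by the sample weather API
interface WeatherForecast {
  dateFormatted: string;
  temperatureC: number;
  temperatureF: number;
  summary: string;
}

// Derive Fahrenheit from Celsius (the same approximation the template's controller uses)
function toFahrenheit(celsius: number): number {
  return 32 + Math.round(celsius / 0.5556);
}

// Turning a raw JSON record into a fully-typed forecast object
const raw = { dateFormatted: "01/01/2018", temperatureC: 0, summary: "Freezing" };
const forecast: WeatherForecast = { ...raw, temperatureF: toFahrenheit(raw.temperatureC) };
console.log(forecast.temperatureF); // 32
```

In the real component, an array of such objects comes back from an HTTP call and is rendered in the view with `*ngFor`.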

The API endpoint resides in the Controllers folder, as shown below:

ang9

The WeatherForecasts method returns JSON data; Angular then processes it and repopulates the web page with the appropriate HTML, as shown below:

ang10

Important Notes:

When we run the application, the TypeScript compiler runs in the background, so any changes you make in the code are reflected automatically in the browser.

This is possible because the Microsoft team has created Node services so that:

  • We can run, from within C# code, a JavaScript library running in Node
  • We can call out to Node.js and invoke JavaScript functions
  • We can return the value to C# code and make use of it

For example, we can kick off WebPack from C# code, so whenever we click the Save button:

  • A watcher running in the background picks up the save
  • It recompiles the JavaScript
  • Node.js, running in the background, recreates the view
  • The view is sent back to the browser

Hope it helps.
