Microsoft Cognitive Services is a collection of APIs, SDKs, and services that developers can use to make their applications more intelligent and engaging. Cognitive Services grew out of the Microsoft Azure Machine Learning APIs and lets developers easily add rich, intelligent features to their applications, such as emotion recognition, video and face detection, speech and vision recognition, and language understanding. Microsoft’s vision with Cognitive Services is to provide more personal and richer computing experiences.

Available APIs

The APIs available under Microsoft Cognitive Services are listed below:

Academic Knowledge API

Bing Autosuggest API

Bing Search APIs (Image/News/Video/Web)

Bing Speech API

Bing Spell Check API

Computer Vision API

Emotion API

Entity Linking API

Face API

Knowledge Exploration Service

Linguistic Analysis API

LUIS (Language Understanding Intelligent Service)

Recommendations API

Speaker Recognition API

Text Analytics API

Video API

Web Language Model API

Academic Knowledge API
– This API interprets user queries for academic intent and retrieves rich information from the Microsoft Academic Graph (MAG). The MAG knowledge base is a web-scale heterogeneous entity graph comprising entities that model scholarly activity: fields of study, authors, institutions, papers, venues, and events. The graph contains fresh information from the web, discovered and indexed by Bing. Based on this dataset, the Academic Knowledge API enables a knowledge-driven, interactive dialog that seamlessly combines reactive search with proactive suggestion experiences, rich research-paper graph search results, and histogram distributions of attribute values for a set of papers and related entities. It is very helpful to the research community worldwide.
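As a rough sketch of how a client might call the interpret functionality, the snippet below only builds the request URL and headers rather than sending them. The endpoint path, parameter names, and the `Ocp-Apim-Subscription-Key` header are assumptions for illustration, so check the official documentation before use.

```python
from urllib.parse import urlencode

# Hypothetical endpoint path; verify against the current documentation.
BASE = "https://api.cognitive.microsoft.com/academic/v1.0/interpret"
SUBSCRIPTION_KEY = "<your-subscription-key>"  # placeholder, not a real key

def build_interpret_request(natural_language_query, count=5):
    """Return the URL and headers for an interpret call (not sent here)."""
    params = urlencode({"query": natural_language_query, "count": count})
    url = BASE + "?" + params
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    return url, headers

url, headers = build_interpret_request("papers about machine learning")
```

Sending the resulting URL with any HTTP client would return the interpreted academic query expressions described above.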

The Bing Autosuggest API – If we send a partial search query to Bing, we get back a list of suggested queries that other users have searched for on the same topic. In addition to searches that other users have made, the list may include suggestions based on user intent. For example, if the query string is "weather in Ahmedabad", the list will include relevant weather-related suggestions as well.
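The sketch below shows how a client might pull the suggestion strings out of an Autosuggest response. The JSON layout (`suggestionGroups` containing `searchSuggestions` with a `displayText` field) is a simplified assumption, not a captured payload.

```python
import json

# Simplified, assumed response shape for illustration only.
sample_response = json.dumps({
    "suggestionGroups": [{
        "searchSuggestions": [
            {"displayText": "weather in ahmedabad today"},
            {"displayText": "weather in ahmedabad tomorrow"},
        ]
    }]
})

def extract_suggestions(raw):
    """Flatten all suggestion groups into a list of display strings."""
    data = json.loads(raw)
    return [s["displayText"]
            for group in data.get("suggestionGroups", [])
            for s in group.get("searchSuggestions", [])]

suggestions = extract_suggestions(sample_response)
```

An application would typically show these strings in a dropdown as the user types.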

The Bing Image Search API – It provides a similar (but not exact) experience to Bing image search: we send a search query to Bing and get back a list of relevant images.

The Bing News Search API – It provides a similar experience to Bing news search: we send a search query to Bing and get back a list of relevant news articles.

Bing Speech API - This API is a cloud-based API that provides advanced algorithms to process spoken language. With this API, developers can add speech-driven actions to their applications, including real-time interaction with the user. Microsoft's cross-platform REST API enables speech capabilities on all internet-connected devices; every major platform is supported, including Android, iOS, Windows, and third-party IoT devices. The REST API offers industry-leading speech-to-text, text-to-speech, and language understanding capabilities delivered through the cloud. Microsoft has used the Bing Speech API in Windows applications such as Cortana and Skype Translator, as well as in Android applications such as Bing Torque for Android Wear and Android Phone.

Bing Spell Check API - It performs contextual spell checking for any text and provides inline suggestions for misspelled words.
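To illustrate what "inline suggestions" could look like in practice, the sketch below applies the top-scored suggestion from a response back onto the original text. The `flaggedTokens`/`suggestions` structure is a simplified assumption of the response shape.

```python
import json

# Assumed, simplified response: one flagged token with its best suggestion.
sample = json.dumps({
    "flaggedTokens": [
        {"offset": 5, "token": "wether",
         "suggestions": [{"suggestion": "weather", "score": 0.9}]}
    ]
})

def apply_corrections(text, raw_response):
    """Replace each flagged token with its top suggestion."""
    tokens = json.loads(raw_response)["flaggedTokens"]
    # Apply corrections right-to-left so earlier offsets stay valid.
    for t in sorted(tokens, key=lambda t: t["offset"], reverse=True):
        best = t["suggestions"][0]["suggestion"]
        start = t["offset"]
        text = text[:start] + best + text[start + len(t["token"]):]
    return text

corrected = apply_corrections("Bing wether today", sample)  # "Bing weather today"
```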

Bing Video Search API – It provides a similar (but not exact) experience to Bing video search: we send a search query to Bing and get back a list of relevant videos.

Microsoft Computer Vision API - This cloud-based API provides developers with access to advanced algorithms for processing images and returning information. By uploading an image or specifying an image URL, the Computer Vision algorithms can analyze and understand visual content in different ways based on the inputs and user choices. It is very helpful nowadays in areas such as image processing and image understanding.
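The two input styles mentioned above (raw image bytes versus an image URL) lead to differently shaped requests. The sketch below builds, but does not send, both variants; the endpoint path, the `visualFeatures` parameter, and the feature names are assumptions for illustration.

```python
from urllib.parse import urlencode

# Hypothetical analyze endpoint; verify against the current documentation.
ANALYZE = "https://api.cognitive.microsoft.com/vision/v1.0/analyze"

def build_analyze_request(image, features=("Categories", "Description")):
    """Return URL, headers, and body for an analyze call (not sent here)."""
    params = urlencode({"visualFeatures": ",".join(features)})
    url = ANALYZE + "?" + params
    if isinstance(image, bytes):
        # Raw upload: binary body with an octet-stream content type.
        headers = {"Content-Type": "application/octet-stream"}
        body = image
    else:
        # URL reference: small JSON body pointing at the image.
        headers = {"Content-Type": "application/json"}
        body = {"url": image}
    return url, headers, body

url, headers, body = build_analyze_request("https://example.com/pic.jpg")
```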

Microsoft Emotion API - It allows us to build more personalized apps with Microsoft’s cutting-edge, cloud-based emotion recognition algorithm.

Microsoft Entity Linking Intelligence Service – It is a web service created to help developers with tasks relating to entity linking. Sometimes in different contexts, a word might be used as a named entity, a verb, or other word form within a given sentence. For example, in the case where “times” is a named entity, it still may refer to two separately distinguishable entities, such as “The New York Times” or “Times Square”. Given a specific paragraph within a document, the Entity Linking Intelligence Service will recognize and identify each separate entity based on its context.
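To make the "times" example concrete, the sketch below reads recognized entities and the offsets of their mentions out of a response. The field names (`entities`, `matches`, `entries`, `offset`) are a simplified assumption of the service's output.

```python
import json

# Assumed, simplified response: "times" at offset 21 resolved to an entity.
sample = json.dumps({
    "entities": [
        {"name": "Times Square",
         "matches": [{"text": "times", "entries": [{"offset": 21}]}]},
    ]
})

def linked_entities(raw):
    """Return (entity name, first mention offset) pairs."""
    return [(e["name"], e["matches"][0]["entries"][0]["offset"])
            for e in json.loads(raw)["entities"]]

entities = linked_entities(sample)
```

An application could use the offsets to highlight each resolved mention in the source paragraph.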

Microsoft Face API – It is a cloud-based service that provides the most advanced face algorithms. The Face API has two main functions: face detection with attributes and face recognition. The Face API detects up to 64 human faces in an image with high-precision face locations. The image can be specified as a file in bytes or as a valid URL. A face rectangle (left, top, width, and height) indicating the face location in the image is returned with each detected face. Optionally, face detection extracts a series of face-related attributes such as age, gender, head pose, facial hair, and glasses.

Face recognition is widely used in many scenarios including security, natural user interface, image content analysis and management, mobile apps, and robotics. Four face recognition functions are provided: face verification, finding similar faces, face grouping, and person identification. Face API verification performs an authentication against two detected faces or authentication from one detected face to one person object.

This API also exposes the finding-similar-faces, face grouping, and person identification functions.
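As a small illustration of working with the face rectangles described above, the sketch below picks the largest detected face from a response. The `faceRectangle` fields mirror the left/top/width/height attributes mentioned earlier, but the payload itself is made up for this example.

```python
import json

# Made-up detect response with two faces of different sizes.
sample = json.dumps([
    {"faceId": "a1",
     "faceRectangle": {"left": 10, "top": 20, "width": 80, "height": 80}},
    {"faceId": "b2",
     "faceRectangle": {"left": 200, "top": 40, "width": 120, "height": 130}},
])

def largest_face(raw):
    """Return the faceId of the face with the largest rectangle area."""
    faces = json.loads(raw)

    def area(face):
        r = face["faceRectangle"]
        return r["width"] * r["height"]

    return max(faces, key=area)["faceId"]

biggest = largest_face(sample)  # "b2"
```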

LUIS API - Language Understanding Intelligent Service (LUIS) brings the power of machine learning to our apps. LUIS is designed to let us quickly deploy an HTTP endpoint that takes the sentences we send it and interprets them in terms of the intention they convey and the key entities that are present. Using the LUIS web interface, we can design a custom set of intentions and entities relevant to our application, then let LUIS guide us through the process of building a language understanding system.

One of the key problems in human-computer interaction is the computer's ability to understand what a person says and wants to do, and to find the pieces of information that are relevant to their intent. For example, in a travel agent app, we might say "Book me a ticket to Ahmedabad", in which case the intention is "BookTicket" while "Ahmedabad" is the location entity. The intention can be defined as the desired action and usually contains a verb (in this case, "book"), while the entity is the topic or subject of the action (in this case, "Ahmedabad").
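The sketch below shows how a client might pull the top-scoring intent and its entities out of a response for the "Book me a ticket to Ahmedabad" example above. The JSON layout (an `intents` list with scores and an `entities` list) is a simplified assumption of what the endpoint returns.

```python
import json

# Assumed, simplified LUIS-style response for the example utterance.
sample = json.dumps({
    "query": "Book me a ticket to Ahmedabad",
    "intents": [{"intent": "BookTicket", "score": 0.97},
                {"intent": "None", "score": 0.02}],
    "entities": [{"entity": "ahmedabad", "type": "Location"}],
})

def top_intent(raw):
    """Return the highest-scoring intent and a type->entity mapping."""
    data = json.loads(raw)
    best = max(data["intents"], key=lambda i: i["score"])
    entities = {e["type"]: e["entity"] for e in data["entities"]}
    return best["intent"], entities

intent, entities = top_intent(sample)
```

The travel agent app would then dispatch on the intent name and fill the booking form from the entities.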

Text Analytics - Understanding and analyzing unstructured text is an increasingly popular field and includes a wide spectrum of problems such as sentiment analysis, key phrase extraction, topic modeling/extraction, aspect extraction and many more.

The Text Analytics API is a suite of text analytics services built with Azure Machine Learning. Microsoft currently offers APIs for sentiment analysis, key-phrase extraction, and topic detection for English text, as well as language detection for 120 languages.
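As an illustration, sentiment analysis APIs of this kind typically accept a batch of documents and return a score per document. The sketch below builds such a batch body and reads scores out of a made-up response; the `documents`/`id`/`score` field names are assumptions following that common pattern.

```python
import json

def make_sentiment_body(texts, language="en"):
    """Build a batched request body: one document entry per input text."""
    return {"documents": [{"id": str(i), "language": language, "text": t}
                          for i, t in enumerate(texts, start=1)]}

body = make_sentiment_body(["I love this phone.",
                            "The battery is terrible."])

# Made-up response: scores near 1 are positive, near 0 negative.
sample_response = json.dumps({
    "documents": [{"id": "1", "score": 0.94},
                  {"id": "2", "score": 0.07}]
})
scores = {d["id"]: d["score"]
          for d in json.loads(sample_response)["documents"]}
```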

Microsoft Video API - The Video API is a cloud-based API that provides advanced algorithms for tracking faces, detecting motion, stabilizing video, and creating thumbnails from video. This API allows us to build more personalized and intelligent apps by understanding and automatically transforming our video content.

See Also

Another important place to find an extensive amount of Cortana Intelligence Suite related articles is the TechNet Wiki itself. The best entry point is Cortana Intelligence Suite Resources on the TechNet Wiki.