Unity + LUIS + JSON


It works! I managed to get LUIS integration working with Unity's new UnityWebRequest object and a JSON package from SaladLab's GitHub. Voice recognition still needs work, though.
All the code is here: Github/KatVHarris
LUIS post is here
Microsoft Bot Framework post is here

Direct Integration of Microsoft Bot Framework

My first attempt at integrating the Microsoft Bot Framework was not met with much success. I wanted to build out the Universal Windows Platform (UWP) project and install the package directly into the Visual Studio project. There has been some success with this approach through the UWP tag:


The tag creates a UWP thread that spins off and completes .NET-specific tasks. The difficult part is that, in order to work with the HoloLens, developers must use the beta version of the Unity Engine (Unity 4.5b22 in my case), which uses version 10 of the Universal App Platform (UAP) and isn't compatible with many libraries yet.

So when I built out my HoloLens application I was met with several errors, the most prevalent one being:

    Microsoft.Bot.Builder 1.2.5 is not compatible with UAP,Version=v10.0.

** If you found a way to get around the compatibility issue with UAP Version 10.0, let me know **

Direct LUIS integration

Because the Bot Framework library wasn't integrating, I decided to call my LUIS endpoint directly. To do this I needed three things:

  1. Web calls for Unity
  2. JSON Unity package and integration
  3. Speech to Text

When Unity 5 was released, they updated their HTTP protocols from the simple WWW object to the UnityWebRequest manager. Calling the LUIS endpoint was simple enough, because the Publish feature provides the sample call to your LUIS project.
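For context, the published endpoint is just an HTTP GET URL that you append your query text to. At the time (mid-2016), the sample call for a LUIS v1 app looked roughly like this; the app ID and subscription key below are placeholders, not real values:

```
https://api.projectoxford.ai/luis/v1/application?id=YOUR_APP_ID&subscription-key=YOUR_KEY&q=
```

Everything up through `q=` is the fixed part of the request, and the user's utterance gets appended to the end.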


In Unity I then created a simple Input Field that would trigger the GetText() method when the user hit enter.

    IEnumerator GetText()
    {
        UnityWebRequest www = UnityWebRequest.Get(requestString + requestText);
        yield return www.Send();

        if (www.isError)
            Debug.Log(www.error);
        else
            luisValue = www.downloadHandler.text;   // Show results as text
    }
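
The Input Field wiring itself can be sketched as a small component like the one below. This is a minimal illustration, not the original project's code: the class name `LuisInputHandler`, the serialized fields, and the placeholder endpoint URL are all my own assumptions.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking; // UnityEngine.Experimental.Networking on older Unity 5.x betas
using UnityEngine.UI;

// Hypothetical component: attach to a GameObject and assign the Input Field in the Inspector.
public class LuisInputHandler : MonoBehaviour
{
    public InputField inputField;
    string requestString = "YOUR_LUIS_ENDPOINT_URL"; // the sample call from the Publish page
    string requestText;

    void Start()
    {
        // onEndEdit fires when the user presses Enter (or the field loses focus).
        inputField.onEndEdit.AddListener(text =>
        {
            requestText = WWW.EscapeURL(text); // URL-encode the query before appending it
            StartCoroutine(GetText());
        });
    }

    IEnumerator GetText()
    {
        UnityWebRequest www = UnityWebRequest.Get(requestString + requestText);
        yield return www.Send();
        Debug.Log(www.isError ? www.error : www.downloadHandler.text);
    }
}
```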

Once the call to the endpoint was working, the next thing I did was try to parse the data. JSON is a pain with Unity, since Unity is built on an old version of .NET morphed from Mono. Luckily there is a workaround thanks to SaladLab's Unity plugin, which is based on the Newtonsoft JSON library (basically THE JSON library for .NET). What's great about the plugin is that it's open source on their GitHub.

After I imported their JSON libraries, I added the namespaces:

  • using Newtonsoft.Json
  • using Newtonsoft.Json.Linq

The LINQ library was able to read in the string and convert it into a usable JSON object that can then be parsed.

luisValue = www.downloadHandler.text;
luisReturnQuery = JObject.Parse(luisValue);
string luisIntent = luisReturnQuery.SelectToken("intents[0].intent").ToString(); // the top-scoring intent

The only catch with this method is that you need to know the structure of the LUIS JSON object you get back in order to parse it correctly.

Generally LUIS objects are structured like this:

  "intents": [
      "score": 0.994736,
      "actions": [
          "triggered": true,
          "name": "INTENT NAME",
          "parameters": [
              "name": "PARAMETER 1",
              "required": true,
              "value": [
                  "entity": "ENTITY NAME",
                  "type": "ENTITY TYPE",
                  "score": 0.9995919 //SCORE ACCURACY
      "intent": "SECOND PLACE INTENT",
      "score": 0.261976063
      "entity": "ENTITY NAME",
      "type": "ENTITY TYPE",
      "startIndex": Character index,
      "endIndex": End Character Index,
      "score": 0.9995919

With both parts integrated into my Unity project, I was able to ping the endpoint and successfully identify different intents.

Next Steps

As of 07/07/16 I began looking into the HoloLens/UAP/Cognitive Services Speech to Text API. Currently there is a listener to trigger certain commands on the HoloLens, but those are one-word/phrase solutions, meaning exact wording is necessary. However, we want more robust natural language processing to interact with users.

Next week will be the endeavor of getting all the voice stuff working.

Happy Hacking!

– TheNappingKat
