
Integration of Dialogflow API

To access Dialogflow CX via the API, we first need to set up the gcloud CLI on our system. If you do not have gcloud set up, please follow this link: https://cloud.google.com/sdk/docs/install

After the gcloud setup, follow these steps to grant authentication permission:

  • Select a project with the command gcloud config set project <ProjectID>
  • Now run the command gcloud auth application-default login
  • You will be prompted with a login page. Once you have successfully logged in, the message “You are now authenticated with the gcloud CLI!” will be displayed. You can also verify the credentials from Python, as shown in the sketch below.
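A minimal check, assuming the google-auth package (installed as a dependency of the Dialogflow client library) is available:

import google.auth

# Load the Application Default Credentials created by
# `gcloud auth application-default login`.
credentials, project = google.auth.default()
print(f"Authenticated against project: {project}")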

Then, we use the detect intent function given in the official Dialogflow API documentation. It takes the project id, location, session id, agent id, query text, and language code as arguments.

from google.cloud.dialogflowcx_v3.services.sessions import SessionsClient
from google.cloud.dialogflowcx_v3.types import session
from google.protobuf.json_format import MessageToDict


def detect_intent_disabled_webhook(project_id, location, session_id, agent_id, text, language_code):
    # Regional agents require a region-specific API endpoint.
    client_options = None
    if location != "global":
        api_endpoint = f"{location}-dialogflow.googleapis.com:443"
        print(f"API Endpoint: {api_endpoint}\n")
        client_options = {"api_endpoint": api_endpoint}
    session_client = SessionsClient(client_options=client_options)

    session_path = session_client.session_path(
        project=project_id, location=location, agent=agent_id, session=session_id
    )

    # Prepare the request.
    text_input = session.TextInput(text=text)
    query_input = session.QueryInput(text=text_input, language_code=language_code)
    # disable_webhook=True skips any webhook configured on the agent for this
    # request; set it to False if you want the webhook to run.
    query_params = session.QueryParameters(disable_webhook=True)
    request = session.DetectIntentRequest(
        session=session_path, query_input=query_input, query_params=query_params
    )
    response = session_client.detect_intent(request=request)
    # print(response)
    response_dict = MessageToDict(response._pb)
    print(response_dict)


project_id = "Your Project ID"
location_id = "Your Location ID"
agent_id = "Your Agent ID"
session_id = "test_1"
text = "Hello"
language_code = "en-us"
detect_intent_disabled_webhook(project_id, location_id, session_id, agent_id, text, language_code)

Here, the project id is the ID of the Google Cloud project that contains the Dialogflow agent, and the agent id identifies the bot itself. We can define the session id to keep the session between Dialogflow and the user alive. The query text is passed as “text”, and “language_code” is the code of the language in which the bot will be working.
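For example, if the bot serves several users, a unique session id can be generated per user. Below is a minimal, hypothetical sketch using Python's uuid module (the get_session_id helper is ours, not part of the Dialogflow API):

import uuid

# Hypothetical helper: keep one session id per end user so each
# conversation keeps its own context in Dialogflow CX.
user_sessions = {}

def get_session_id(user_id):
    if user_id not in user_sessions:
        user_sessions[user_id] = str(uuid.uuid4())
    return user_sessions[user_id]

print(get_session_id("user_42"))  # prints a random UUID string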

To set the query text (the user utterance) in the code, we defined the variable “text”. The function is called in the last line of the code.

Sample response

After executing the above function, the Dialogflow API detects an intent based on the text entered by the user and returns a response object.

Below is a sample response generated by the Dialogflow API. To get the bot's reply to our message, we need to extract the fulfillment text from the response object, as shown in the sketch after the sample response.

				
					{
 "responseId": "d91c430d-7b4e-44c1-a5f5-f4563fdd4e6f",
 "queryResult": {
   "text": "Hello",
   "languageCode": "en",
   "responseMessages": [
     {
       "text": {
         "text": [
           "Welcome to Dialogflow CX."
         ]
       }
     }
   ],
   "currentPage": {
     "name": "projects/appointment-cxx/locations/us-central1/agents/fcc53e9b-5f20-421c-b836-4df53554526c/flows/00000000-0000-0000-0000-000000000000/pages/START_PAGE",
     "displayName": "Start Page"
   },
   "intent": {
     "name": "projects/appointment-cxx/locations/us-central1/agents/fcc53e9b-5f20-421c-b836-4df53554526c/intents/00000000-0000-0000-0000-000000000000",
     "displayName": "Default Welcome Intent"
   },
   "intentDetectionConfidence": 1,
   "diagnosticInfo": {
     "Transition Targets Chain": [],
     "Session Id": "test_1",
     "Alternative Matched Intents": [
       {
         "Type": "NLU",
         "Score": 1,
         "DisplayName": "Default Welcome Intent",
         "Id": "00000000-0000-0000-0000-000000000000",
         "Active": True
       }
     ],
     "Execution Sequence": [
       {
         "Step 1": {
           "Type": "INITIAL_STATE",
           "InitialState": {
             "FlowState": {
               "FlowId": "00000000-0000-0000-0000-000000000000",
               "Name": "Default Start Flow",
               "PageState": {
                 "Status": "ENTERING_PAGE",
                 "Name": "Start Page",
                 "PageId": "START_PAGE"
               },
               "Version": 0
             },
             "MatchedIntent": {
               "Type": "NLU",
               "Score": 1,
               "DisplayName": "Default Welcome Intent",
               "Active": True,
               "Id": "00000000-0000-0000-0000-000000000000"
             }
           }
         }
       },
       {
         "Step 2": {
           "FunctionExecution": {
             "Responses": [
               {
                 "text": {
                   "redactedText": [
                     "Welcome to Dialogflow CX."
                   ],
                   "text": [
                     "Welcome to Dialogflow CX."
                   ]
                 },
                 "responseType": "HANDLER_PROMPT",
                 "source": "VIRTUAL_AGENT"
               }
             ]
           },
           "Type": "STATE_MACHINE",
           "StateMachine": {
             "FlowState": {
               "FlowId": "00000000-0000-0000-0000-000000000000",
               "PageState": {
                 "Status": "TRANSITION_ROUTING",
                 "Name": "Start Page",
                 "PageId": "START_PAGE"
               },
               "Version": 0,
               "Name": "Default Start Flow"
             },
             "FlowLevelTransition": True,
             "TriggeredIntent": "Default Welcome Intent",
             "TriggeredTransitionRouteId": "f48cf8b5-c147-42a2-b967-12d2a3c0fcad"
           }
         }
       },
       {
         "Step 3": {
           "Type": "STATE_MACHINE",
           "StateMachine": {
             "FlowState": {
               "FlowId": "00000000-0000-0000-0000-000000000000",
               "Name": "Default Start Flow",
               "PageState": {
                 "Status": "TRANSITION_ROUTING",
                 "Name": "Start Page",
                 "PageId": "START_PAGE"
               },
               "Version": 0
             }
           }
         }
       }
     ],
     "Triggered Transition Names": [
       "f48cf8b5-c147-42a2-b967-12d2a3c0fcad"
     ]
   },
   "match": {
     "intent": {
       "name": "projects/appointment-cxx/locations/us-central1/agents/fcc53e9b-5f20-421c-b836-4df53554526c/intents/00000000-0000-0000-0000-000000000000",
       "displayName": "Default Welcome Intent"
     },
     "resolvedInput": "Hello",
     "matchType": "INTENT",
     "confidence": 1
   }
 },
 "responseType": "FINAL"
}
				
			
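As a minimal sketch, assuming response_dict is the dictionary returned by MessageToDict(response._pb) in the function above, the fulfillment text can be pulled out of queryResult.responseMessages like this:

# Collect every text reply from queryResult.responseMessages.
messages = response_dict.get("queryResult", {}).get("responseMessages", [])
fulfillment_texts = [
    line
    for message in messages
    if "text" in message
    for line in message["text"]["text"]
]
print(" ".join(fulfillment_texts))  # e.g. "Welcome to Dialogflow CX."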

Voice Detection using Dialogflow API

The Dialogflow API can also detect intents from speech audio input, with the help of the function given in its documentation.

It takes inputs similar to the text version; the only change is that instead of passing the query text, we pass the path of the audio file to be processed.

The API takes the audio file as input, converts the audio into query text, and passes that text on to Dialogflow. The response comes back with the same fields as before, including the reply to our audio input.

The function we used to detect intent from the audio file is shown below, taken from the Dialogflow API documentation. The audio file must be in WAV format with a single (mono) channel, otherwise the code will throw an error.

Refer to the script below to convert the audio into a mono-channel WAV file.

from pydub import AudioSegment

# Load the source audio, convert it to a single (mono) channel
# and write it back out in WAV format.
input_audio = AudioSegment.from_wav("YOUR-AUDIO-FILE-PATH")
input_audio = input_audio.set_channels(1)
input_audio.export("YOUR-AUDIO-FILE-PATH", format="wav")
Once the audio is in mono WAV format, use the following function to detect intent from it:
import uuid

from google.cloud.dialogflowcx_v3.services.agents import AgentsClient
from google.cloud.dialogflowcx_v3.services.sessions import SessionsClient
from google.cloud.dialogflowcx_v3.types import audio_config
from google.cloud.dialogflowcx_v3.types import session


def detect_intent_audio(agent, session_id, audio_file_path, language_code):
    """Returns the result of detect intent with an audio file as input.

    Using the same `session_id` between requests allows continuation
    of the conversation."""
    session_path = f"{agent}/sessions/{session_id}"
    print(f"Session path: {session_path}\n")
    client_options = None
    agent_components = AgentsClient.parse_agent_path(agent)
    location_id = agent_components["location"]
    if location_id != "global":
        api_endpoint = f"{location_id}-dialogflow.googleapis.com:443"
        print(f"API Endpoint: {api_endpoint}\n")
        client_options = {"api_endpoint": api_endpoint}
    session_client = SessionsClient(client_options=client_options)

    # The audio is expected to be mono-channel, 16-bit linear PCM (WAV).
    input_audio_config = audio_config.InputAudioConfig(
        audio_encoding=audio_config.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    )

    with open(audio_file_path, "rb") as audio_file:
        input_audio = audio_file.read()

    audio_input = session.AudioInput(config=input_audio_config, audio=input_audio)
    query_input = session.QueryInput(audio=audio_input, language_code=language_code)
    request = session.DetectIntentRequest(session=session_path, query_input=query_input)
    response = session_client.detect_intent(request=request)

    print("=" * 20)
    print(f"Query text: {response.query_result.transcript}")
    response_messages = [
        " ".join(msg.text.text) for msg in response.query_result.response_messages
    ]
    print(f"Response text: {' '.join(response_messages)}\n")
    return response


project_id = "YOUR-PROJECT-ID"
location_id = "YOUR-LOCATION-ID"
agent_id = "YOUR-AGENT-ID"
agent = f"projects/{project_id}/locations/{location_id}/agents/{agent_id}"
session_id = str(uuid.uuid4())
audio_file_path = "YOUR-AUDIO-FILE-PATH"
language_code = "en-us"
detect_intent_audio(agent, session_id, audio_file_path, language_code)

Sample response for audio

After executing the above function, below is the sample response generated by the Dialogflow API for the given audio:
Query text: hello
Response text: Welcome to Dialogflow CX.

If you need the full JSON response, you can convert the response variable to a dictionary, as shown in the sketch below. These JSON responses from the Dialogflow API make it easier to integrate the agent with other chatbot platforms. We hope this documentation will assist you in using the Dialogflow API with a chatbot.
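A minimal sketch, reusing the detect_intent_audio function above (which returns the full DetectIntentResponse) and the same variables:

from google.protobuf.json_format import MessageToDict

# Convert the protobuf response to a plain dict, as in the text example.
response = detect_intent_audio(agent, session_id, audio_file_path, language_code)
response_dict = MessageToDict(response._pb)
print(response_dict["queryResult"]["transcript"])  # the recognized query text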

Please let us know in the comments section if you face any difficulties in implementing the above functions. We would be glad to help you. If you are looking for Chatbot Development or Natural Language Processing services, then do contact us or send your requirement to letstalk@pragnakalp.com. We would be happy to offer our expert services.
