The following guide presents customization options for Watson Assistant and explains how to implement functions specific to IBM Watson. It is intended for users who are familiar with, and have access to, the IBM Watson cloud dashboard.
Sending Native Content
Watson Assistant allows the user to define native response types in dialog nodes. The supported Watson Assistant native types include Video, Audio, Image, Iframe, Option, Pause, and Text. Users can define one or more native content items per dialog node. The native content types can be defined with the Watson wizard or with the JSON editor. (Figure 2.1 shows how to access both in IBM Watson dialog nodes.)
Figure 2.1 IBM Watson Dialog Native Rich Content Wizard and JSON Editor
Please note: the response types Connect to human agent and Channel Transfer are not supported by the Third-Party Bots connector.
If you use the JSON editor, the usual body of the native content in a dialog node is as follows:
{
"output": {
"generic": [
// An array of objects containing the different Watson native content types that you define.
]
}
}
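As a quick illustration of this skeleton, the sketch below defines a single Video response. It assumes the Video type uses the same "source", "title" and "description" properties as the Image type described in the next section; the URL is a placeholder, so adjust it to your own content.
{
  "output": {
    "generic": [
      {
        "source": "https://example.com/media/intro.mp4",
        "title": "Product intro video",
        "description": "Short clip introducing the product",
        "response_type": "video"
      }
    ]
  }
}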
Image
Users can define an Image type using the IBM Watson Assistant dashboard. To do this, select the dialog node that will hold the image response. Click "Add response type" and select Image from the select box, as shown in Figure 2.2.
Figure 2.2 Response type of Image is highlighted
Once the Image type is selected, you will be asked to fill in its information. The "Image Source" URL must be provided. You can also add an image title and description (an example filled form is shown in Figure 2.3).
Image URLs must be added to a whitelist via internal LivePerson configuration (Houston: messaging.rich.content.valid.urls). Please note that you must add all possible domains to this list manually, as wildcards are not supported. Moreover, all domains must be HTTPS secure.
Figure 2.3 Image fields filled example
If you are using the JSON editor, you can add an Image type with the following JSON. Make sure to replace the "source", "title" and "description" properties with your own data.
{
"output": {
"generic": [
{
"source": "https://images.pexels.com/photos/699122/pexels-photo-699122.jpeg",
"title": "A nice Iphone",
"description": "showing initial details about phone",
"response_type": "image"
}
]
}
}
List
Users can define a List type using the IBM Watson Assistant dashboard. To do this, select the dialog node that will hold the list response. Click "Add response type" and select Option from the select box, as shown in Figure 2.4.
Figure 2.4 Response type of List is highlighted
Once the "Option" is selected the form that needs to be filled will be shown. You must provide "Title" and also "Description". Furthermore, different choices of options can be added by clicking the "Add option" button. Once the button is clicked you will be asked to put a label of option and value. Make sure you fill both of them (example filled form shown in Figure 2.5).
Figure 2.5 List fields filled example
If you are using the JSON editor, a List has the following structure. Note that the "options" property is an array of objects holding the items that are presented to the user to choose from.
{
"output": {
"generic": [
{
"title": "",
"description": "",
"options": [
// The list of options you want to present to the user
],
"response_type": "option"
}
]
}
}
An example list filled with two options can be seen below. Please note that within each option object, "text" (value->input->text) is the value that you set for that option.
{
"output": {
"generic": [
{
"title": "Which phone type you like?",
"description": "Question for asking which phone OS is preferred",
"options": [
{
"label": "iOS",
"value": {
"input": {
"text": "ios"
}
}
},
{
"label": "Android",
"value": {
"input": {
"text": "android"
}
}
}
],
"response_type": "option"
}
]
}
}
Pause/Delay
Note: Using the delay as the single/sole response from the bot to the consumer is effectively a 'no response' action. This allows the bot to receive a consumer message without responding to the consumer.
Users can define a Pause type if they want to add a delay before responding. To add this content type, select the dialog node that will hold the pause response. Click "Add response type" and select the Pause option, as shown in Figure 2.6.
Figure 2.6 Response type of Pause is highlighted
Once the "Pause" is selected the form will ask you to provide the duration (unit is in milliseconds). This allows the conversation to be paused for the amount of time defined in the "Duration" field. Moreover, If you want to show a user an indication of typing you can select that with the Typing Indicator radio box. (example filled form is shown in Figure 2.7). This will show an indication like "Agent is typing…" for the amount of time of delay that is set in "Duration".
Figure 2.7 Pause fields filled example
If you are using the JSON editor, you can use the following JSON structure to define a Pause content type. This example will pause for 5 seconds (5000 milliseconds) with the typing indication on.
{
"output": {
"generic": [
{
"time": 5000,
"typing": true,
"response_type": "pause"
}
]
}
}
Text
Users can define a Text type to send a textual response. To add this type, select the dialog node that will hold the text response. Click "Add response type" and select the Text option, as shown in Figure 2.8.
Figure 2.8 Response type of Text is highlighted
Once the "Text" is selected the form will allow you to add the response texts. You can add multiple response variations (an example filled form is shown in Figure 2.9).
Figure 2.9 Text fields filled example
If you are using the JSON editor, you can use the following JSON structure to create text responses. The example below defines two text responses that will be used sequentially.
{
"output": {
"generic": [
{
"response_type": "text",
"values": [
{
"text": "Hi Good Morning!"
},
{
"text": "Hi Good Evening!"
}
],
"selection_policy": "sequential"
}
]
}
}
Defining multiple responses with Watson Native content
Users can define a response with various content types combined. The following example shows such a case using the JSON editor: the response will first send a text message, then pause for 5 seconds, and finally send an image.
{
"output": {
"generic": [
{
"values": [
{
"text": "Hi Good Morning!"
},
{
"text": "Hi Good Evening!"
}
],
"response_type": "text",
"selection_policy": "sequential"
},
{
"time": 5000,
"typing": true,
"response_type": "pause"
},
{
"title": "iPhone 8",
"source": "https://cdn.bgr.com/2016/08/iphone-8-concept.jpg",
"description": "iPhone 8 concept",
"response_type": "image"
}
]
}
}
Sending Rich Content (Structured Content)
If images are sent in rich content, their URLs must be added to a whitelist via internal LivePerson configuration (Houston: messaging.rich.content.valid.urls). Please note that you must add all possible domains to this list manually, as wildcards are not supported. Moreover, all domains must be HTTPS secure.
The core Conversational Cloud platform supports the use of rich/structured content, and as a result the Bot Connector supports it as well. For more information on the format and functionality available, please refer to the documentation found here.
To send structured content via Watson Assistant you will need to send a custom JSON payload as a user_defined response type. To do this, you will need to select the dialog node that will hold the structured content.
From there, under the Assistant responds section, click the three vertical dots to open the list of options and select Open JSON Editor (Figure 2.10).
Figure 2.10 Watson Assistant Dialog JSON Editor
In the JSON Editor you will need to add your custom JSON response (Figure 2.11).
Figure 2.11 Watson Assistant JSON Editor
There is a strict JSON structure for the response that must be used. The JSON structure for a user_defined response can be found below in Figure 2.12. An example with a sample JSON that uses a standard Structured Content card with a button option can be seen in Figure 2.13.
{
"output": {
"generic":[
{
"response_type": "user_defined",
"user_defined": {
"metadata": [
{
"id": "1234",
"type": "ExternalId"
}
],
"structuredContent": {}
}
}
]
}
}
Figure 2.12 Structured Content Watson user_defined JSON Structure (JSON Editor should contain this object structure for Rich Content)
{
"output": {
"generic": [
{
"response_type": "user_defined",
"user_defined": {
"metadata": [
{
"id": "1234",
"type": "ExternalId"
}
],
"structuredContent": {
"type": "vertical",
"elements": [
{
"type": "button",
"click": {
"actions": [
{
"text": "Recommend me a movie, please",
"type": "publishText"
}
]
},
"title": "Recommend a movie"
}
]
}
}
}
]
}
}
Figure 2.13 Structured Content Watson user_defined JSON Example (JSON Editor should contain this object structure for Rich Content)
Sending Quick Replies (Structured Content)
Please note Quick Replies are only supported in Messaging Conversations.
Quick Replies are a special kind of Structured Content. The content should be added to the quickReplies property of the structuredContent object, and a message must also be included in the structuredContent object. This message will be sent to the customer along with the Quick Replies. For detailed information on Quick Replies, check out the documentation for the specific channel (Mobile SDK and Web, Facebook Messenger, Google RCS Business Messaging).
{
"structuredContent": {
"quickReplies": {
"type": "quickReplies",
"itemsPerRow": 8,
"replies": [
{
"type": "button",
"tooltip": "yes I do",
"title": "yes",
"click": {
"actions": [
{
"type": "publishText",
"text": "yep"
}
],
"metadata": [
{
"type": "ExternalId",
"id": "Yes-1234"
}
]
}
},
{
"type": "button",
"tooltip": "No!",
"title": "No!",
"click": {
"actions": [
{
"type": "publishText",
"text": "No!"
}
],
"metadata": [
{
"type": "ExternalId",
"id": "No-4321"
}
]
}
}
]
},
"message": "Message to send before sending QuickReplies content"
},
"metadata": [
{
"id": "1234",
"type": "ExternalId"
}
]
}
Figure 2.14 Quick Replies StructuredContent example.
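Because Quick Replies are Structured Content, they are sent from Watson Assistant inside the same user_defined wrapper shown in Figure 2.12. The following is a minimal sketch combining that wrapper with the quickReplies payload from Figure 2.14 (shortened to a single "yes" button for brevity); treat it as an illustration of the wrapping rather than a complete example.
{
  "output": {
    "generic": [
      {
        "response_type": "user_defined",
        "user_defined": {
          "metadata": [
            {
              "id": "1234",
              "type": "ExternalId"
            }
          ],
          "structuredContent": {
            "quickReplies": {
              "type": "quickReplies",
              "itemsPerRow": 8,
              "replies": [
                {
                  "type": "button",
                  "tooltip": "yes I do",
                  "title": "yes",
                  "click": {
                    "actions": [
                      {
                        "type": "publishText",
                        "text": "yep"
                      }
                    ],
                    "metadata": [
                      {
                        "type": "ExternalId",
                        "id": "Yes-1234"
                      }
                    ]
                  }
                }
              ]
            },
            "message": "Message to send before sending QuickReplies content"
          }
        }
      }
    ]
  }
}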
Bot Actions
Please note we only support ONE ACTION per response
Transfer / Escalations
Naming conventions: Before going into actions and skills, note the naming conventions for each. All non-escalation actions are named using underscores; for example, to close a conversation, the action name returned by Watson needs to be CLOSE_CONVERSATION. Any additional functionality added later that can be called by an action from the AI will follow the same naming convention. For escalation skills, the naming convention should use a "-" instead of whitespace. Furthermore, if transferring to a skill specifically assigned to bots, it is best practice to prefix the skill name with "BOT-" within Conversational Cloud.
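As an illustration of these conventions only, assuming a bot skill named BOT-Sales-Support exists in Conversational Cloud, the actions portion of a transfer response (the full structure is shown in Figure 2.15 below) would reference it as follows:
{
  "actions": [
    {
      "name": "TRANSFER",
      "type": "client",
      "parameters": {
        "skill": "BOT-Sales-Support"
      },
      "result_variable": "none"
    }
  ]
}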
Transfer To Skill
This option transfers the conversation to the next available agent using the provided skill.
At the beginning of a chat session or when a messaging bot logs in, the list of enabled skills on the account is retrieved, keyed by name, and stored. When a transfer is requested by the bot, the target skill's name is looked up in the stored list, its ID is retrieved, and the conversation is escalated to that skill. In Watson Assistant, this should be configured in the following way:
Parameters: 'skill' (case-sensitive) with 'value' of the skill name (case-sensitive) in Conversational Cloud.
Figure 2.15 Watson Assistant JSON response for transfer chat/conversation to a skill
{
"output": {
"generic": [
{
"values": [
{
"text": "Escalating to a human"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"actions": [
{
"name": "TRANSFER",
"type": "client",
"parameters": {
"skill": "human_skill"
},
"result_variable": "none"
}
]
}
In the example above, you can see the actions array. Inside the array, we define an escalation skill name in the skill parameter. This will be sent in an object to the chat/messaging connector, which will grab the skillId from a previously stored array based on the name, and escalate.
Transfer to Agent
This feature depends on permissions.
This option transfers the conversation to the particular agent matching the provided agentId and skill. If the agent is not available, the conversation will be transferred to an available agent with the same skill.
Parameters: 'skill' (case-sensitive) with 'value' of the skill name (case-sensitive) in Conversational Cloud, and 'agentId' (case-sensitive) with 'value' of the agent ID in Conversational Cloud.
Figure 2.16 Watson Assistant JSON response for transfer chat/conversation to a specific agent
In the Then respond with: JSON editor block, use the following:
{
"output": {
"generic": [
{
"values": [
{
"text": "Escalating to an agent"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"actions": [
{
"name": "TRANSFER",
"type": "client",
"parameters": {
"skill": "human_skill",
"agentId": "4129463410"
},
"result_variable": "none"
}
]
}
Close Conversation
To close a chat or messaging conversation, we utilize the actions array as we did for a transfer (see Figure 2.16). The Watson Assistant JSON response should mirror the example shown below in Figure 2.17:
{
"output": {
"generic": [
{
"values": [
{
"text": "Thanks for chatting with us today!"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"actions": [
{
"name": "CLOSE_CONVERSATION",
"type": "client",
"result_variable": "none"
}
]
}
Figure 2.17 Watson Assistant JSON response for closing chat/conversation
To close a conversation without triggering a post-conversation survey, see the example below in Figure 2.18:
{
"output": {
"generic": [
{
"values": [
{
"text": "Thanks for chatting with us today!"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"actions": [
{
"name": "CLOSE_CONVERSATION",
"type": "client",
"parameters": {
"withoutPcs": true
},
"result_variable": "none"
}
]
}
Figure 2.18 Watson Assistant JSON response for closing conversations without pcs
Change Time To Response of Conversation
Change the TTR of a conversation based on the action response of Watson. There are three different types: "URGENT", "NORMAL", and "PRIORITIZED". The time values for these are defined in the Agent Workspace.
{
"output": {
"generic": [
{
"values": [
{
"text": "Sure thing! Change the TTR to Urgent."
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"actions": [
{
"name": "CHANGE_TTR",
"type": "client",
"parameters": {
"ttrType": "URGENT"
},
"result_variable": "none"
}
]
}
Figure 2.19 Watson JSON response for changing TTR