The AI Assistant Custom Widget is designed to enhance the request analysis workflow by leveraging multiple AI providers.

The widget currently supports three AI providers:
Google's Gemini (Online)
OpenAI's GPT-4 (Online)
Ollama (Local LLMs)
The AI Assistant custom widget in ServiceDesk Plus offers the following benefits:
Ask anything about a request (Ask AI)
Accurate image analysis (Gemini-only)
Display relevant solutions within ServiceDesk Plus (Gemini-only)
Streamlined post-incident review (PIR)
Analyzes the current request details, including the requester's sentiment, and provides a comprehensive breakdown of the request's key components.
Identifies critical information, priorities, and potential concerns.
Assists the technician in understanding complex requests.

Generates a structured, step-by-step resolution strategy taking into account the request's context and best practices.
Standardizes resolution approaches across teams by including time estimates and resource requirements.

Allows users to question the AI in natural language about any aspect of the request.

Helps clarify complex aspects or technical details and supports decision-making with AI-powered insights.

Users can upload images, ask the AI to assist them, and receive AI's analysis based on the request context.

Combines visual and textual context to help technicians understand the issues at a glance.

Searches the solutions accessible to technicians within ServiceDesk Plus and surfaces those potentially relevant to the request.
Reduces resolution time by finding the right solution from the existing knowledge base.

Creates detailed post-incident reports.
Includes incident timeline, impact analysis, and resolution steps while capturing key metrics and learnings from the incident, streamlining the incident documentation process.
Features an Export as PDF button to download the generated PIR seamlessly.

The Gemini provider is configured in config.json inside the widget zip. Each provider has its own configuration block.
Gemini configuration requires an API key. To get the Gemini API key:
Go to https://aistudio.google.com/live and sign in with your Google credentials.
Click Get API key at the top-left.
Click Create API key to generate your unique API key.

Copy the generated API key.
Unzip the widget files.
Open config.json in a text editor.
Paste the API key as the value of the API_KEY parameter in the Gemini configuration block.
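For reference, the Gemini block in config.json might look like the following sketch. The API_KEY parameter name comes from the step above; the surrounding structure is illustrative and may differ in the actual file:

```json
{
  "gemini": {
    "API_KEY": "YOUR_GEMINI_API_KEY"
  }
}
```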

After you update the parameter, zip all the widget files together.
Go to Admin > Developer Space > Custom Widget > + Custom Widget.
Upload and save the zipped file to use Gemini as the AI provider for this widget.
Model Used: Gemini 1.5 Flash
Capabilities:
Text analysis
Image analysis
Uses direct REST API calls
Supports multimodal inputs (text + images)
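The direct REST call with multimodal input described above can be sketched as follows. The endpoint and payload shape follow Google's public generateContent API for Gemini 1.5 Flash; the helper names here are assumptions for illustration, not the widget's actual code:

```python
import json
import urllib.request

# Public REST endpoint for Gemini 1.5 Flash (text + image inputs).
GEMINI_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent"
)

def build_gemini_payload(prompt, image_base64=None, mime_type="image/png"):
    """Build a request body with a text part and an optional inline image part."""
    parts = [{"text": prompt}]
    if image_base64:
        parts.append({"inline_data": {"mime_type": mime_type,
                                      "data": image_base64}})
    return {"contents": [{"parts": parts}]}

def ask_gemini(api_key, prompt, image_base64=None):
    """POST the payload directly to the REST endpoint and return the answer text."""
    req = urllib.request.Request(
        f"{GEMINI_ENDPOINT}?key={api_key}",
        data=json.dumps(build_gemini_payload(prompt, image_base64)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The first candidate's first text part holds the model's answer.
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

Passing a base64-encoded image alongside the prompt is what enables the image-analysis features that are Gemini-only in this widget.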
To configure a connection for OpenAI in ServiceDesk Plus,
Go to Admin > Developer Space > Connections > Custom Services > Create Service.
Enter the Service name and Service Link Name as openAI.
Set the Authentication Type to Basic Authentication.
Click Create Service.
After you create the service, click Create Connection.

Enter the Connection Name and Connection Link Name as openAI (matching the service name).
Click Create and Connect.
Click Connect. You will be prompted to enter the username and password.

Enter " " (a blank username) for the username, and use the OpenAI API key as the password.
To create or retrieve your OpenAI API key, go to https://platform.openai.com/api-keys. You can use an existing API key or create a new secret key, then use it as the password.
After you fill out the username and password, click Connect.
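With Basic Authentication, the blank username and API-key password are combined into a single Authorization header. The sketch below shows how that header is derived; it is illustrative only, since ServiceDesk Plus builds the header internally from the credentials entered in the Connect dialog:

```python
import base64

def basic_auth_header(username, password):
    """Encode 'username:password' in base64, per HTTP Basic Authentication."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# Blank username with the OpenAI API key ("sk-..." is a placeholder) as password:
# basic_auth_header("", "sk-...")  ->  "Basic OnNrLS4uLg=="
```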


Go to the widget zip and open the plugin-manifest.json file.
Paste the JSON under the connections array and save the file.

Select and zip all the widget files.
Upload it under Admin > Developer Space > Custom Widget to use OpenAI as the AI provider for this widget.
Model used: GPT-4
Capabilities:
Text analysis
System prompt sets context as "helpful assistant that analyzes user requests"
Messages are structured in chat format with system and user roles.
API Endpoint: https://api.openai.com/v1/chat/completions
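The chat-format structure described above can be sketched as a request body for the listed endpoint. The model name, endpoint, and system prompt come from this document; the helper function itself is an assumption for illustration, not the widget's actual code:

```python
# Endpoint listed above for OpenAI chat completions.
OPENAI_ENDPOINT = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(request_details):
    """Structure the messages in chat format with system and user roles."""
    return {
        "model": "gpt-4",
        "messages": [
            # System prompt sets the assistant's context.
            {"role": "system",
             "content": "You are a helpful assistant that analyzes user requests."},
            # The request details to analyze go in the user message.
            {"role": "user", "content": request_details},
        ],
    }
```

The system/user split keeps the analysis instructions fixed while each request's details vary in the user message.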
To use LLMs that run entirely on your local server machine, click here.