mirror of
https://github.com/hoshikawa2/agent_oci_automation.git
synced 2026-03-06 02:10:37 +00:00
refactoring: the solution is now chat-based
7
.idea/codeStyles/Project.xml
generated
Normal file
@@ -0,0 +1,7 @@
<component name="ProjectCodeStyleConfiguration">
  <code_scheme name="Project" version="173">
    <ScalaCodeStyleSettings>
      <option name="MULTILINE_STRING_CLOSING_QUOTES_ON_NEW_LINE" value="true" />
    </ScalaCodeStyleSettings>
  </code_scheme>
</component>
5
.idea/codeStyles/codeStyleConfig.xml
generated
Normal file
@@ -0,0 +1,5 @@
<component name="ProjectCodeStyleConfiguration">
  <state>
    <option name="PREFERRED_PROJECT_CODE_STYLE" value="Default" />
  </state>
</component>
24
.oca/custom_code_review_guidelines.txt
Normal file
@@ -0,0 +1,24 @@
# Sample guideline; please follow a similar structure for guidelines with code samples.
# 1. Suggest using streams instead of simple loops for better readability.
# <example>
# *Comment:
# Category: Minor
# Issue: Use streams instead of a loop for better readability.
# Code Block:
#
# ```java
# // Calculate squares of numbers
# List<Integer> squares = new ArrayList<>();
# for (int number : numbers) {
#     squares.add(number * number);
# }
# ```
# Recommendation:
#
# ```java
# // Calculate squares of numbers
# List<Integer> squares = numbers.stream()
#     .map(n -> n * n) // Map each number to its square
#     .toList();
# ```
# </example>
807
README.md
@@ -89,434 +89,247 @@ Download and install the [requirements.txt](./files/requirements.txt) file using

```
pip install -r requirements.txt
```

## Understand the Code

You can download the source code here:

- [agent_over_mcp.py](./files/agent_over_mcp.py)
- [server_mcp.py](./files/server_mcp.py)
- [requirements.txt](./files/requirements.txt)
- [config file](./files/config)

## 🚀 Concepts

### Model Context Protocol (MCP)

MCP provides a standardized way to expose tools (functions) from a backend server to AI agents.
In this demo:

- **`server_mcp.py`** → Exposes OCI-related tools (`find_compartment`, `resolve_image`, `resolve_shape`, etc.).
- **`webchat.py`** → Provides a webchat interface where the user interacts with the agent.
### Parameter Resolution

Parameters are divided into two categories:

- **Literal parameters** (extracted directly from the user request):
  - `display_name`, `ocpus`, `memoryInGBs`
- **Resolvable parameters** (require lookup via MCP tools):
  - `compartment_id`, `subnet_id`, `availability_domain`, `image_id`, `shape`

### Agent code

The pipeline follows a **Schema A → Schema B** flow:

1. Schema A: partial resolution, with `candidates` or `ask` fields if ambiguity exists.
2. Schema B: final payload, ready for compute instance creation.
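The literal/resolvable split can be sketched as a small classifier. The parameter names follow the README; the helper itself is hypothetical and not part of the repository:

```python
# Hypothetical helper illustrating the literal/resolvable split described above.
LITERAL_PARAMS = {"display_name", "ocpus", "memoryInGBs"}
RESOLVABLE_PARAMS = {"compartment_id", "subnet_id", "availability_domain", "image_id", "shape"}

def classify(params: dict) -> dict:
    """Split user-supplied values into literals and names that still need an OCID lookup."""
    literal = {k: v for k, v in params.items() if k in LITERAL_PARAMS}
    resolvable = {k: v for k, v in params.items() if k in RESOLVABLE_PARAMS}
    return {"literal": literal, "resolvable": resolvable}

print(classify({"display_name": "vm01", "ocpus": 2, "compartment_id": "Dev"}))
# → {'literal': {'display_name': 'vm01', 'ocpus': 2}, 'resolvable': {'compartment_id': 'Dev'}}
```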
This script builds an OCI Operations Agent using LangChain, LangGraph, and the MCP protocol.
It connects to an MCP server that exposes tools for managing OCI resources and uses an Oracle Cloud Generative AI model to interact in natural language.
The agent follows the ReAct pattern (Reason + Act) to alternate between reasoning and tool usage.

---
It imports the required libraries:

- Standard Python modules for system interaction, file I/O, JSON parsing, and asynchronous execution.
- LangChain for prompt and message abstractions.
- The OCI Generative AI wrapper for LangChain to connect to Oracle-hosted LLMs.
- LangGraph's prebuilt ReAct agent builder.
- The MCP client adapter to connect and fetch tool definitions from MCP servers.

```python
import sys
import os
import json
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage, AIMessage
from langchain_mcp_adapters.client import MultiServerMCPClient
```

It loads external configuration from `./config` to avoid hardcoding environment-specific values.
This file contains endpoint URLs, the OCI compartment ID, profile name, and CLI paths.

```python
# Configuration file
with open("./config", "r") as f:
    config_data = json.load(f)
```

## 📂 Project Structure

```
├── server_mcp.py   # MCP server exposing OCI tools
├── webchat.py      # Flask webchat app connected to MCP server
├── README.md       # Documentation (this file)
```

It defines a minimal in-memory state to hold the conversation history.
This will store all `HumanMessage` and `AIMessage` objects exchanged during the chat session.

```python
# In-memory store for the conversation history
class MemoryState:
    def __init__(self):
        self.messages = []
```

---

## 🛠️ Key Components

### 1. `server_mcp.py`

This script exposes MCP tools for resolving OCI resources. Example tools:

- `find_compartment` → Locate compartments by name.
- `find_subnet` → Locate subnets within a compartment.
- `list_availability_domains` / `find_ad` → Resolve availability domains.
- `resolve_image` → Find images (e.g., Oracle Linux 9).
- `resolve_shape` → Match compute shapes (e.g., `VM.Standard.E4.Flex`).
- `create_compute_instance` → Launch a VM using the OCI CLI.

Each tool returns structured JSON with either:

- A **single match** (directly placed in `parameters`)
- **Multiple matches** (returned in `candidates` for user choice)
- **No matches** (returned as an `ask` prompt)

It creates a LangChain chat model for OCI Generative AI.
The model is `cohere.command-r-08-2024`, configured with a low temperature for deterministic output and a maximum token limit for responses.

```python
# Define the language model
llm = ChatOCIGenAI(
    model_id="cohere.command-r-08-2024",
    service_endpoint=config_data["llm_endpoint"],
    compartment_id=config_data["compartment_id"],
    auth_profile=config_data["oci_profile"],
    model_kwargs={"temperature": 0.1, "top_p": 0.75, "max_tokens": 2000}
)
```

> 🔑 **Design principle:** literal parameters (name, OCPUs, memory) are never candidates.

---

It builds the system prompt for the agent.
The prompt sets the role, defines interaction and operational rules, and specifies output formatting guidelines.
It includes a `{messages}` placeholder for injecting chat history dynamically.

### 2. `webchat.py`

The webchat provides a conversational interface. It integrates:

- **Prompt design** (instructions for literal/resolvable params, candidates, asks)
- **Flask server** to handle user requests and responses
- **Async MCP client** to call tools exposed in `server_mcp.py`

```python
# Prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """
You are an OCI Operations Agent with access to MCP tools (server `oci-ops`).
Your goal is to provision and manage OCI resources **without requiring the user to know OCIDs**.

INTERACTION RULES:
1) Wait until the user asks to create a resource.
2) If all the parameters have the OCID information, create the resource.
3) If all the parameters were filled by the user, create the resource.
4) If a given parameter is a name and needs to be converted to an OCID, search for it automatically.
5) If a parameter is missing, ask for the information.
6) Do not wait for a response from creation. Inform "Creation of resource is Done."

IMPORTANT RULES:
1) Never invent OCIDs. Prefer to ask succinct follow-ups.
2) Prefer to reuse defaults from memory when appropriate.

OUTPUT STYLE:
- Questions: short, one parameter at a time.
- Show: mini-summary with final values.
- Candidate lists: numbered, with name (type) — ocid — score when available.
"""),
    ("placeholder", "{messages}")
])
```

It defines the asynchronous `main()` function. This function:

- Creates an MCP client configured to run `server_mcp.py` for the `oci-ops` server using stdio transport.
- Sets environment variables for the OCI CLI.

```python
async def main():
    client = MultiServerMCPClient(
        {
            "oci-ops": {
                "command": sys.executable,
                "args": ["server_mcp.py"],
                "transport": "stdio",
                "env": {
                    "PATH": os.environ.get("PATH", "") + os.pathsep + os.path.expanduser("~/.local/bin"),
                    "OCI_CLI_BIN": config_data["OCI_CLI_BIN"],
                    "OCI_CLI_PROFILE": config_data["oci_profile"],
                },
            },
        }
    )
```

It retrieves the available tools from the MCP server.
If no tools are found, the function exits; otherwise, it prints the tool names.

```python
    tools = await client.get_tools()
    if not tools:
        print("❌ No MCP tools were loaded. Please check if the server is running.")
        return

    print("🛠️ Loaded tools:", [t.name for t in tools])
```

It initializes the memory state and creates the ReAct agent with LangGraph, using the LLM, tools, and prompt defined earlier.

```python
    # Create the LangGraph agent with in-memory state
    memory_state = MemoryState()
    memory_state.messages = []

    agent_executor = create_react_agent(
        model=llm,
        tools=tools,
        prompt=prompt,
    )
```

It starts an interactive conversation loop.
Each user input is appended to memory, sent to the agent, and the agent's reply is displayed.
The loop ends when the user types `quit` or `exit`.

```python
    print("🤖 READY")
    while True:
        query = input("You: ")
        if query.lower() in ["quit", "exit"]:
            break
        if not query.strip():
            continue

        memory_state.messages.append(HumanMessage(content=query))
        try:
            result = await agent_executor.ainvoke({"messages": memory_state.messages})
            new_messages = result.get("messages", [])

            # Store new messages
            memory_state.messages.extend(new_messages)

            print("Assist:", new_messages[-1].content)
        except Exception as e:
            print("Error:", e)
```

It runs the asynchronous `main()` function when the script is executed directly.

```python
# Run the agent with asyncio
if __name__ == "__main__":
    asyncio.run(main())
```

### MCP Server

This script implements an MCP (Model Context Protocol) server using FastMCP for Oracle Cloud Infrastructure (OCI).
It exposes several MCP tools to find and resolve OCI resources and to create compute instances using the `oci` CLI.
It also includes helper functions for logging, parsing the OCI config, normalizing and comparing strings, and running CLI commands.

It starts by importing all required libraries.
These include standard Python modules for system commands, text normalization, config parsing, JSON, and subprocess execution, plus the FastMCP server class from the `mcp.server.fastmcp` package.

```python
import re
import shlex
import subprocess
import unicodedata
from typing import Any, Dict, List, Optional, Tuple
import os
import json
import configparser
from mcp.server.fastmcp import FastMCP
```

It loads a config file containing runtime parameters such as the OCI CLI binary path and the OCI CLI profile name.

```python
# Config file
with open("./config", "r") as f:
    config_data = json.load(f)
```

It initializes a FastMCP server instance named `oci-ops`.
This name is how clients (like the LangChain agent) will refer to this MCP server.

```python
# FastMCP server
mcp = FastMCP("oci-ops")
```

It defines a helper function to append command lines and outputs to a log file (`log.txt`).
This is used throughout the script to keep a record of all `oci` commands executed and their results.

```python
def append_line(file_path: str, base: list):
    """
    Save the sequence of commands in `base` to a text file.

    Args:
        file_path (str): Path to the text file.
        base (list): List of command parts to save.
    """
    with open(file_path, "a", encoding="utf-8") as f:
        command_line = " ".join(map(str, base))
        f.write(command_line + "\n")
        f.flush()
```

It declares a wrapper class `OCI` to run `oci` CLI commands with the configured profile and binary path.
The `run()` method logs the command, executes it, logs stdout and stderr, and returns the result.

```python
class OCI:
    def __init__(self, profile: Optional[str] = None, bin_path: Optional[str] = None):
        self.profile = config_data["oci_profile"]
        self.bin = config_data["OCI_CLI_BIN"]

    def run(self, args: List[str]) -> Tuple[int, str, str]:
        try:
            base = [self.bin]
            if self.profile:
                base += ["--profile", self.profile]
            cmd = base + args
            append_line("log.txt", cmd)
            proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
            append_line("log.txt", proc.stdout)
            append_line("log.txt", proc.stderr)
            append_line("log.txt", "--------------------------")
            return proc.returncode, proc.stdout, proc.stderr
        except Exception as ex:
            append_line("log.txt", str(ex))
            # Return a non-zero code so callers can still unpack the 3-tuple
            return 1, "", str(ex)

oci_cli = OCI(profile=config_data["oci_profile"])
```

It defines helpers to read the OCI config file, get the tenancy OCID, and safely parse JSON from strings.
These are used by the MCP tools to supply default values and parse command results.

```python
def _read_oci_config(profile: Optional[str]) -> Dict[str, str]:
    cfg_path = os.path.expanduser("~/.oci/config")
    cp = configparser.ConfigParser()
    if os.path.exists(cfg_path):
        cp.read(cfg_path)
        prof = config_data["oci_profile"]
        if cp.has_section(prof):
            return {k: v for k, v in cp.items(prof)}
    return {}

def _tenancy_ocid() -> Optional[str]:
    return _read_oci_config(config_data["oci_profile"]).get("tenancy")

def _safe_json(s: str) -> Any:
    try:
        return json.loads(s)
    except Exception:
        return {"raw": s}
```

It implements phonetic and fuzzy matching helpers for Brazilian Portuguese (pt-BR).
These are used to match names that are similar in spelling or sound (e.g., when the user provides a partial resource name).

```python
_consonant_map = {
    "b": "1", "f": "1", "p": "1", "v": "1",
    "c": "2", "g": "2", "j": "2", "k": "2", "q": "2", "s": "2", "x": "2", "z": "2",
    "d": "3", "t": "3",
    "l": "4",
    "m": "5", "n": "5",
    "r": "6",
}

def _normalize(text: str) -> str:
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[^a-zA-Z0-9 ]+", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def ptbr_soundex(word: str, maxlen: int = 6) -> str:
    w = _normalize(word)
    if not w:
        return ""
    first_letter = w[0]
    digits = []
    prev = ""
    for ch in w[1:]:
        if ch in "aeiouhwy ":
            code = ""
        else:
            code = _consonant_map.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code
    code = (first_letter + "".join(digits))[:maxlen]
    return code.ljust(maxlen, "0")

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, _normalize(a), _normalize(b)).ratio()
```

It enforces the **Schema A / Schema B contract**:

- Schema A (when resolution is incomplete):

```json
{
  "parameters": {
    "compartment_id": null,
    "subnet_id": null,
    "availability_domain": null,
    "image_id": null,
    "shape": null,
    "ocpus": 2,
    "memoryInGBs": 16,
    "display_name": "test_vm"
  },
  "candidates": {
    "image_id": [
      { "index": 1, "name": "Oracle-Linux-9.6-2025.09.16-0", "ocid": "ocid1.image....", "version": "2025.09.16", "score": 0.98 }
    ]
  },
  "ask": "Which image do you want to use?"
}
```

- Schema B (final payload for creation):

```json
{
  "compartmentId": "ocid1.compartment...",
  "subnetId": "ocid1.subnet...",
  "availabilityDomain": "IAfA:SA-SAOPAULO-1-AD-1",
  "imageId": "ocid1.image...",
  "displayName": "test_vm",
  "shape": "VM.Standard.E4.Flex",
  "shapeConfig": { "ocpus": 2, "memoryInGBs": 16 }
}
```

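For a quick illustration of what these helpers return, here is a standalone copy of them with two sample calls:

```python
import re
import unicodedata
from difflib import SequenceMatcher

_consonant_map = {
    "b": "1", "f": "1", "p": "1", "v": "1",
    "c": "2", "g": "2", "j": "2", "k": "2", "q": "2", "s": "2", "x": "2", "z": "2",
    "d": "3", "t": "3",
    "l": "4",
    "m": "5", "n": "5",
    "r": "6",
}

def _normalize(text: str) -> str:
    # Strip accents, punctuation, and extra whitespace; lowercase the result
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[^a-zA-Z0-9 ]+", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def ptbr_soundex(word: str, maxlen: int = 6) -> str:
    # Keep the first letter, encode following consonants, pad with zeros
    w = _normalize(word)
    if not w:
        return ""
    digits, prev = [], ""
    for ch in w[1:]:
        code = "" if ch in "aeiouhwy " else _consonant_map.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code
    return (w[0] + "".join(digits))[:maxlen].ljust(maxlen, "0")

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, _normalize(a), _normalize(b)).ratio()

print(similarity("São Paulo", "sao paulo"))  # accents stripped → identical → 1.0
print(ptbr_soundex("Paulo"))                 # → 'p40000'
```

Note how `ptbr_soundex("Paolo")` and `ptbr_soundex("Paulo")` collide, which is exactly what makes partial or misspelled resource names matchable.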
It declares MCP tools using the `@mcp.tool()` decorator.
Each function implements a specific OCI operation or search, using the `oci_cli.run()` helper.

---

## 🔎 Code Walkthrough

## Overall Architecture

The system consists of two main modules:

- **`server_mcp.py`**: MCP server that exposes tools to resolve OCI parameters (e.g., `find_compartment`, `resolve_image`, `resolve_shape`).
- **`webchat.py`**: Flask-based frontend that receives natural-language user input, builds the **complex prompt** for the LLM, manages parameter state, and calls MCP tools when required.

### Operation Flow

```mermaid
flowchart TD
    subgraph User["🧑 User"]
        U1["Natural input (e.g., 'create VM called X with 2 OCPUs')"]
    end

    subgraph Web["💻 webchat.py (Frontend Flask + LLM)"]
        P["Complex System Prompt"]
        M["LLM Inference (OCI GenAI)"]
        S["State Manager (parameters + candidates)"]
    end

    subgraph Server["🔧 server_mcp.py (MCP Tools)"]
        T1["find_compartment"]
        T2["find_subnet"]
        T3["resolve_shape"]
        T4["resolve_image"]
        T5["create_compute_instance"]
    end

    U1 --> P --> M
    M -->|JSON Schema A| S
    S -->|missing resolvable param| Server
    Server --> S
    S -->|final JSON Schema B| T5
```

---

## webchat.py — Key Highlights

### 1. **Prompt**

The `system_text` defines a strict **multi-step pipeline** for the LLM.
It ensures consistent behavior and prevents hallucination:

- **Step 1 (Extraction)** → Extract all literal values from user input (name, OCPUs, memory, etc.).
- **Step 2 (Classification)** → Separate **literal parameters** (fixed) from **resolvable ones** (which require OCID lookup).
- **Step 3 (Resolution)** → Call MCP tools to resolve resolvable parameters. Generate candidates if ambiguous.
- **Step 4 (Assembly)** → Return Schema A (partial, with candidates/ask) or Schema B (final, ready to create).

👉 **Why so detailed?**
Because LLMs tend to **invent keys or mix formats**. This prompt enforces:

- Literals never become candidates.
- Resolvables always have a fallback (candidates or ask).
- Final output is deterministic: Schema A or Schema B.

---

### 2. **State Management**

```python
if "parameters" not in memory:
    memory["parameters"] = {
        "compartment_id": None,
        "subnet_id": None,
        "availability_domain": None,
        "image_id": None,
        "shape": None,
        "ocpus": None,
        "memoryInGBs": None,
        "display_name": None
    }
```

- Keeps track of partially resolved parameters.
- Supports **multi-turn conversations** (the user can refine parameters incrementally).
- Prevents context loss between steps.

---

On the server side, the lookup tools follow the same pattern:

```python
# ------------------------------
# MCP Tools
# ------------------------------
@mcp.tool()
async def find_subnet(query_text: str) -> dict:
    """
    Find the subnet OCID by name.
    """
    structured = f"query subnet resources where displayName =~ '.*{query_text}*.'"
    code, out, err = oci_cli.run(["search", "resource", "structured-search", "--query-text", structured])
    if code != 0:
        return {"status": "error", "stderr": err, "stdout": out}
    data = json.loads(out)
    items = data.get("data", {}).get("items", [])
    return {"status": "ok", "data": items}

@mcp.tool()
async def list_availability_domains(compartment_ocid: Optional[str] = None) -> Dict[str, Any]:
    """List ADs with `oci iam availability-domain list`."""
    cid = compartment_ocid or _tenancy_ocid()
    if not cid:
        return {"status": "error", "error": "Missing tenancy compartment OCID."}
    code, out, err = oci_cli.run(["iam", "availability-domain", "list", "--compartment-id", cid])
    if code != 0:
        return {"status": "error", "stderr": err, "stdout": out}
    return {"status": "ok", "data": _safe_json(out)}

@mcp.tool()
async def find_ad(name_or_hint: str, compartment_ocid: Optional[str] = None) -> Dict[str, Any]:
    """Find the AD by a name (e.g., 'SAOPAULO-1-AD-1')."""
    lst = await list_availability_domains(compartment_ocid)
    if lst.get("status") != "ok":
        return lst
    items = lst["data"].get("data", []) if isinstance(lst["data"], dict) else []
    q = _normalize(name_or_hint)
    scored = []
    for ad in items:
        adname = ad.get("name") or ad.get("display-name") or ""
        s = similarity(q, adname)
        scored.append((s, adname))
    scored.sort(reverse=True, key=lambda x: x[0])
    if not scored:
        return {"status": "not_found", "candidates": []}
    best = scored[0]
    return {"status": "ok" if best[0] >= 0.6 else "ambiguous", "ad": best[1], "candidates": [n for _, n in scored[:5]]}

async def list_shapes(compartment_ocid: Optional[str] = None, ad: Optional[str] = None) -> Dict[str, Any]:
    """List the shapes with `oci compute shape list --all` (needs compartment; AD is optional)."""
    cid = compartment_ocid or _tenancy_ocid()
    if not cid:
        return {"status": "error", "error": "Missing compartment OCID."}
    args = ["compute", "shape", "list", "--compartment-id", cid, "--all"]
    if ad:
        args += ["--availability-domain", ad]
    code, out, err = oci_cli.run(args)
    if code != 0:
        return {"status": "error", "stderr": err, "stdout": out}
    data = _safe_json(out)
    return {"status": "ok", "data": data.get("data", []) if isinstance(data, dict) else data}
```

### 3. **MCP Tool Integration**

When the LLM detects a resolvable parameter, `webchat.py` does not fabricate OCIDs.
Instead, it calls `server_mcp.py` tools:

```python
tools = asyncio.get_event_loop().run_until_complete(load_tools())
```

Returned candidates always contain **real OCIDs** from OCI:

```json
{
  "candidates": {
    "image_id": [
      { "index": 1, "name": "Oracle-Linux-9.6-2025.09.16-0", "ocid": "ocid1.image.oc1..aaaa...", "version": "2025.09.16", "score": 0.98 }
    ]
  }
}
```

---

### 4. **Schema A vs Schema B**

- **Schema A (resolving phase)**

```json
{
  "parameters": { "shape": null, "image_id": null, "display_name": "vm01" },
  "candidates": { "shape": [...], "image_id": [...] },
  "ask": "Please select a shape"
}
```

- **Schema B (final, ready to create)**

```json
{
  "compartmentId": "...",
  "subnetId": "...",
  "availabilityDomain": "...",
  "imageId": "...",
  "displayName": "vm01",
  "shape": "VM.Standard.E4.Flex",
  "shapeConfig": { "ocpus": 2, "memoryInGBs": 16 }
}
```

👉 Ensures that **creation is only triggered when all parameters are fully resolved**.

---

📌 **Summary:**

- `server_mcp.py` = tool provider.
- `webchat.py` = orchestrator with complex prompt, state, and LLM integration.
- The prompt enforces deterministic JSON outputs.
- The architecture supports **multi-turn resolution** with candidates and safe fallback.

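The Schema A → Schema B gate can be sketched as a readiness check. This is a hypothetical helper (not part of the repo); the key names follow the two schemas:

```python
# Hypothetical readiness check mirroring the Schema A → Schema B transition.
REQUIRED = ["compartment_id", "subnet_id", "availability_domain", "image_id",
            "shape", "ocpus", "memoryInGBs", "display_name"]

def to_schema_b(parameters: dict):
    """Return a Schema B payload if every parameter is resolved, else None."""
    if any(parameters.get(k) is None for k in REQUIRED):
        return None  # still Schema A territory: keep asking / offering candidates
    return {
        "compartmentId": parameters["compartment_id"],
        "subnetId": parameters["subnet_id"],
        "availabilityDomain": parameters["availability_domain"],
        "imageId": parameters["image_id"],
        "displayName": parameters["display_name"],
        "shape": parameters["shape"],
        "shapeConfig": {"ocpus": parameters["ocpus"], "memoryInGBs": parameters["memoryInGBs"]},
    }
```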
### `server_mcp.py`

```python
@mcp.tool()
async def resolve_shape(hint: str, compartment_ocid: Optional[str] = None, ad: Optional[str] = None) -> Dict[str, Any]:
    """Resolve a shape by hint like 'e4' → best match 'VM.Standard.E4.Flex'."""
    lst = await list_shapes(compartment_ocid=compartment_ocid, ad=ad)
    if lst.get("status") != "ok":
        return lst
    items = lst.get("data", [])
    q = _normalize(hint)
    scored = []
    for s in items:
        name = s.get("shape") or ""
        s1 = similarity(q, name)
        # bonus if the hint matches the family prefix
        fam = _normalize(name.replace("VM.Standard.", ""))
        s1 += 0.2 if fam.startswith(q) or q in fam else 0
        scored.append((s1, name))
    scored.sort(reverse=True, key=lambda x: x[0])
    if not scored:
        return {"status": "not_found", "candidates": []}
    best = scored[0]
    return {"status": "ok" if best[0] >= 0.6 else "ambiguous", "shape": best[1], "candidates": [n for _, n in scored[:5]]}
```

🔹 This function uses similarity scoring to match user input with available shapes.
🔹 Returns either the best match or a candidate list.

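A caller might interpret the `ok` / `ambiguous` / `not_found` statuses like this (a hypothetical client-side sketch; the status values come from the tool above):

```python
# Hypothetical client-side handling of the statuses returned by resolve_shape.
def handle_resolution(result: dict) -> str:
    if result.get("status") == "ok":
        return f"Using shape {result['shape']}"
    if result.get("status") == "ambiguous":
        opts = ", ".join(result.get("candidates", []))
        return f"Please choose one of: {opts}"
    return "No matching shape found; please refine the hint."

print(handle_resolution({"status": "ok", "shape": "VM.Standard.E4.Flex"}))
# → Using shape VM.Standard.E4.Flex
```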
```python
async def list_images(compartment_ocid: Optional[str] = None,
                      operating_system: Optional[str] = None,
                      operating_system_version: Optional[str] = None,
                      shape: Optional[str] = None) -> Dict[str, Any]:
    """List images, optionally filtered by OS, version, and shape."""
    cid = compartment_ocid or _tenancy_ocid()
    if not cid:
        return {"status": "error", "error": "Missing compartment OCID."}
    args = ["compute", "image", "list", "--compartment-id", cid, "--all"]
    if operating_system:
        args += ["--operating-system", operating_system]
    if operating_system_version:
        args += ["--operating-system-version", operating_system_version]
    if shape:
        args += ["--shape", shape]
    code, out, err = oci_cli.run(args)
    if code != 0:
        return {"status": "error", "stderr": err, "stdout": out}
    data = _safe_json(out)
    items = data.get("data", []) if isinstance(data, dict) else []
    return {"status": "ok", "data": items}

@mcp.tool()
async def resolve_image(query: str,
                        compartment_ocid: Optional[str] = None,
                        shape: Optional[str] = None) -> Dict[str, Any]:
    """Find the image by a short name or similarity."""
    # heuristic
    q = query.strip()
    os_name, os_ver = None, None
    # examples: "Oracle Linux 9", "OracleLinux 9", "OL9"
    if "linux" in q.lower():
        os_name = "Oracle Linux"
        m = re.search(r"(?:^|\D)(\d{1,2})(?:\D|$)", q)
        if m:
            os_ver = m.group(1)

    # Filter by OS and version
    lst = await list_images(compartment_ocid=compartment_ocid, operating_system=os_name, operating_system_version=os_ver)
    if lst.get("status") != "ok":
        return lst
    items = lst["data"]
    if not items:
        # fallback: no filter, list everything and fuzzy-match on display-name
        lst = await list_images(compartment_ocid=compartment_ocid)
        if lst.get("status") != "ok":
            return lst
        items = lst["data"]

    # rank by display-name similarity and creation date
    ranked = []
    for img in items:
        dn = img.get("display-name", "")
        s = similarity(query, dn)
        ts = img.get("time-created") or img.get("time_created") or ""
        ranked.append((s, ts, img))
    ranked.sort(key=lambda x: (x[0], x[1]), reverse=True)

    if not ranked:
        return {"status": "not_found", "candidates": []}

    best = ranked[0][2]
    # top-5 candidates
    cands = []
    for s, _, img in ranked[:5]:
        cands.append({"name": img.get("display-name"), "ocid": img["id"], "score": round(float(s), 4)})

    status = "ok" if cands and cands[0]["score"] >= 0.65 else "ambiguous"
    return {"status": status, "resource": cands[0] if cands else None, "candidates": cands}

def _norm(s: str) -> str:
    return _normalize(s)
```

```python
@mcp.tool()
async def find_compartment(query_text: str) -> dict:
    """
    Find a compartment OCID by name.
    """
    structured = f"query compartment resources where displayName =~ '.*{query_text}*.'"
    code, out, err = oci_cli.run(["search", "resource", "structured-search", "--query-text", structured])
    if code != 0:
        return {"status": "error", "stderr": err, "stdout": out}
    data = json.loads(out)
    items = data.get("data", {}).get("items", [])
    return {"status": "ok", "data": items}
```

---

### `create_compute_instance`

```python
@mcp.tool()
async def create_compute_instance(
    compartment_ocid: Optional[str] = None,
    subnet_ocid: Optional[str] = None,
    availability_domain: Optional[str] = None,
    shape: Optional[str] = None,
    ocpus: Optional[int] = None,
    memory: Optional[int] = None,
    image_ocid: Optional[str] = None,
    display_name: Optional[str] = None,
    ssh_authorized_keys_path: Optional[str] = None,
    extra_args: Optional[List[str]] = None,
) -> Dict[str, Any]:
    """Create an OCI Compute instance via `oci` CLI. Missing parameters should be asked upstream by the agent.
    ## Example of expected parameters to create a compute instance: ##
    compartment-id: ocid1.compartment.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    subnet-id: ocid1.subnet.oc1.sa-saopaulo-1.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    shape: VM.Standard.E4.Flex
    availability-domain: IAfA:SA-SAOPAULO-1-AD-1
    image-id: ocid1.image.oc1.sa-saopaulo-1.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    display-name: teste_hoshikawa
    shape-config: '{"ocpus": 2, "memoryInGBs": 16}'
    """
    args = [
        "compute", "instance", "launch",
        "--compartment-id", compartment_ocid or "",
        "--subnet-id", subnet_ocid or "",
        "--shape", shape or "",
        "--shape-config", json.dumps({"ocpus": ocpus, "memoryInGBs": memory}),
        "--availability-domain", availability_domain or "",
        "--image-id", image_ocid or "",
        #"--source-details", json.dumps({"sourceType": "image", "imageId": image_ocid or ""}),
    ]
    if display_name:
        args += ["--display-name", display_name]
    if ssh_authorized_keys_path:
        args += ["--metadata", json.dumps({"ssh_authorized_keys": open(ssh_authorized_keys_path, "r", encoding="utf-8").read()})]
    if extra_args:
        args += extra_args

    # validate basics
    for flag in ["--compartment-id", "--subnet-id", "--shape", "--availability-domain"]:
        if "" in [args[args.index(flag) + 1]]:
            return {"status": "error", "error": f"Missing required {flag} value"}

    code, out, err = oci_cli.run(args)
    if code != 0:
        return {"status": "error", "error": err.strip(), "stdout": out}
    try:
        payload = json.loads(out)
    except Exception:
        payload = {"raw": out}
    return {"status": "ok", "oci_result": payload}
```

🔹 Wraps the OCI CLI to launch an instance.

🔹 Ensures `ocpus` and `memoryInGBs` are packaged under `--shape-config`.

🔹 Returns the full OCI CLI result or error details.
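
The flag-assembly step can be checked without touching the CLI. Below is an illustrative sketch (`build_launch_args` is a hypothetical helper and the OCIDs are fake); it shows how `ocpus` and `memory` end up as a single JSON value behind `--shape-config`:

```python
import json

def build_launch_args(compartment_ocid, subnet_ocid, shape, ad, image_ocid, ocpus, memory):
    # Mirrors the arg assembly above: ocpus/memoryInGBs travel as one JSON string.
    return [
        "compute", "instance", "launch",
        "--compartment-id", compartment_ocid or "",
        "--subnet-id", subnet_ocid or "",
        "--shape", shape or "",
        "--shape-config", json.dumps({"ocpus": ocpus, "memoryInGBs": memory}),
        "--availability-domain", ad or "",
        "--image-id", image_ocid or "",
    ]

args = build_launch_args("ocid1.compartment.oc1..xxx", "ocid1.subnet.oc1..yyy",
                         "VM.Standard.E4.Flex", "IAfA:SA-SAOPAULO-1-AD-1",
                         "ocid1.image.oc1..zzz", 2, 16)
# Same validation idea as in the tool: every required flag needs a non-empty value.
for flag in ["--compartment-id", "--subnet-id", "--shape", "--availability-domain"]:
    assert args[args.index(flag) + 1] != ""
print(json.loads(args[args.index("--shape-config") + 1]))
```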
### `oci_cli_passthrough`

```python
@mcp.tool()
async def oci_cli_passthrough(raw: str) -> Dict[str, Any]:
    """Run an arbitrary `oci` CLI command (single string). Example: "network vcn list --compartment-id ocid1..."""
    args = shlex.split(raw)
    code, out, err = oci_cli.run(args)
    result = {"returncode": code, "stdout": out, "stderr": err}
    # try JSON parse
    try:
        result["json"] = json.loads(out)
    except Exception:
        pass
    return result
```
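
The passthrough's parse-if-possible behavior is easy to reason about in isolation. A small sketch of the same pattern (no `oci` binary involved; the command string and outputs are invented):

```python
import json
import shlex

def parse_cli_output(raw_command, stdout):
    # Split the raw command like the tool does, keep raw stdout,
    # and attach a parsed "json" key only when stdout is valid JSON.
    result = {"args": shlex.split(raw_command), "stdout": stdout}
    try:
        result["json"] = json.loads(stdout)
    except Exception:
        pass
    return result

ok = parse_cli_output('network vcn list --compartment-id ocid1...', '{"data": []}')
bad = parse_cli_output('os ns get', 'not-json')
print("json" in ok, "json" in bad)
```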
---

## ▶️ How to Run

1. Start the MCP server (ONLY IF YOU NEED TO INTEGRATE WITH OTHER PROCESSES):

    >**Note:** Only run the server if you want to integrate it with other processes. In this tutorial you don't need to start it manually; `webchat.py` launches `server_mcp.py` itself.

    ```bash
    python server_mcp.py
    ```

    The script's entrypoint starts the MCP server with stdio transport when executed directly, ready to be launched by a client (such as the LangChain agent):

    ```python
    if __name__ == "__main__":
        # Start FastMCP server (stdio by default). A host (your agent/IDE) should launch this.
        mcp.run(transport="stdio")
    ```

2. Start the webchat UI:

    ```bash
    python webchat.py --device=cuda
    ```

3. Open in browser:

    ```
    http://localhost:8080
    ```

## Test the Code

Configure the parameters in the [config](./files/config) file:

```json
{
    "oci_profile": "DEFAULT",
    "compartment_id": "ocid1.compartment.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
    "llm_endpoint": "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    "OCI_CLI_BIN": "/<path for your oci executable>/oci"
}
```
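
Both `webchat.py` and `server_mcp.py` read this JSON file at startup. A fail-fast loader like the sketch below (illustrative only; `REQUIRED_KEYS` and `load_config` are not part of the project) turns a missing key into a clear error instead of a later `KeyError`:

```python
import json
import os
import tempfile

REQUIRED_KEYS = {"oci_profile", "compartment_id", "llm_endpoint", "OCI_CLI_BIN"}

def load_config(path):
    # Load the JSON config and verify every expected key is present.
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config is missing keys: {sorted(missing)}")
    return cfg

# Demo with a throwaway file instead of ./files/config:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"oci_profile": "DEFAULT", "compartment_id": "ocid1...",
               "llm_endpoint": "https://example.invalid", "OCI_CLI_BIN": "/usr/bin/oci"}, tmp)
    path = tmp.name
cfg = load_config(path)
os.remove(path)
print(cfg["oci_profile"])
```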

## 💡 Example Usage

![img.png](images/img.png)

- **Create VM**:

    ```
    create a VM called test_hoshikawa in compartment cristiano.hoshikawa,
    availability domain SA-SAOPAULO-1-AD-1,
    subnet "public subnet-vcn" in compartment xpto,
    shape VM.Standard.E4.Flex,
    image Oracle Linux 9,
    with 2 OCPUs and 16 GB memory
    ```

    The agent responds with Schema A or Schema B, depending on how parameter resolution went.

And see the result in the OCI Console:

![img_1.png](images/img_1.png)
---

## 🐞 Troubleshooting

- **Error: too many tokens** → Simplify the input request; avoid unnecessary repetition.
- **Missing shapeConfig** → Ensure both `ocpus` and `memoryInGBs` are extracted and passed.
- **Authorization errors** → Validate your OCI CLI config and IAM permissions.

## Reference

- [Installing the CLI](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm)
- [Build an AI Agent with Multi-Agent Communication Protocol Server for Invoice Resolution](https://docs.oracle.com/en/learn/oci-aiagent-mcp-server)
- [Develop a Simple AI Agent Tool using Oracle Cloud Infrastructure Generative AI and REST APIs](https://docs.oracle.com/en/learn/oci-agent-ai/)
- [LangChain MCP Adapters](https://github.com/langchain-ai/langchain-mcp-adapters)

## Acknowledgments

- **Author** - Cristiano Hoshikawa (Oracle LAD A-Team Solution Engineer)

---

## 📜 License

MIT License

files/agent_over_mcp.py (removed in this commit)

@@ -1,115 +0,0 @@
import sys
import os
import json
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage, AIMessage
from langchain_mcp_adapters.client import MultiServerMCPClient

# Configuration File
with open("./config", "r") as f:
    config_data = json.load(f)

# Memory Management for the OCI Resource Parameters
class MemoryState:
    def __init__(self):
        self.messages = []

# Define the language model
llm = ChatOCIGenAI(
    model_id="cohere.command-r-08-2024",
    service_endpoint=config_data["llm_endpoint"],
    compartment_id=config_data["compartment_id"],
    auth_profile=config_data["oci_profile"],
    model_kwargs={"temperature": 0.1, "top_p": 0.75, "max_tokens": 2000}
)

# Prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """
You are an OCI Operations Agent with access to MCP tools (server `oci-ops`).
Your goal is to provision and manage OCI resources **without requiring the user to know OCIDs**.

INTERACTION RULES:
1) Wait until the user asks to create a resource
2) If all the parameters have the OCID information, create the resource
3) If all the parameters were filled by the user, create the resource
4) If a given parameter is a name and needs to be converted to an OCID, search for it automatically
5) If a parameter is missing, ask for the information
6) Do not wait for a response from creation. Inform "Creation of resource is Done."

IMPORTANT RULES:
1) Never invent OCIDs. Prefer to ask succinct follow-ups.
2) Prefer to reuse defaults from memory when appropriate

OUTPUT STYLE:
- Questions: short, one parameter at a time.
- Show: mini-summary with final values.
- Candidate lists: numbered, with name (type) — ocid — score when available.
"""),
    ("placeholder", "{messages}")
])

# Run the client with the MCP server
async def main():
    client = MultiServerMCPClient(
        {
            "oci-ops": {
                "command": sys.executable,
                "args": ["server_mcp.py"],
                "transport": "stdio",
                "env": {
                    "PATH": os.environ.get("PATH", "") + os.pathsep + os.path.expanduser("~/.local/bin"),
                    "OCI_CLI_BIN": config_data["OCI_CLI_BIN"],
                    "OCI_CLI_PROFILE": config_data["oci_profile"],
                },
            },
        }
    )

    tools = await client.get_tools()
    if not tools:
        print("❌ No MCP tools were loaded. Please check if the server is running.")
        return

    print("🛠️ Loaded tools:", [t.name for t in tools])

    # Creating the LangGraph agent with in-memory state
    memory_state = MemoryState()
    memory_state.messages = []

    agent_executor = create_react_agent(
        model=llm,
        tools=tools,
        prompt=prompt,
    )

    print("🤖 READY")
    while True:
        query = input("You: ")
        if query.lower() in ["quit", "exit"]:
            break
        if not query.strip():
            continue

        memory_state.messages.append(HumanMessage(content=query))
        try:
            result = await agent_executor.ainvoke({"messages": memory_state.messages})
            new_messages = result.get("messages", [])

            # Store new messages
            memory_state.messages.extend(new_messages)

            print("Assist:", new_messages[-1].content)

        except Exception as e:
            print("Error:", e)

# Run the agent with asyncio
if __name__ == "__main__":
    asyncio.run(main())
files/server_mcp.py

@@ -2,23 +2,37 @@
# ==============================================
# server_mcp.py — MCP Server (FastMCP) for OCI
# ==============================================
# Features
# - SQLite database storing OCI resource OCIDs (name, type, ocid, compartment, tags)
# - Phonetic + fuzzy search (accent-insensitive Soundex + difflib fallback)
# - Tools to: add/update/list/search resources; resolve name→OCID; simple memory KV store
# - Tool to create OCI resources via `oci` CLI (VM example + generic passthrough)
# - Designed for MCP hosts; start with: `python server_mcp.py`
# --------------------------------------------------------------
import asyncio
import configparser
import json
import os
import re
import shlex
import shutil
import sqlite3
import subprocess
import sys
import unicodedata
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple

import oci
import oracledb
from mcp.server.fastmcp import FastMCP

# Config File
with open("./config", "r") as f:
    config_data = json.load(f)

# FastMCP Server
mcp = FastMCP("oci-ops")

# ------------------------------
@@ -65,6 +79,8 @@ class OCI:
oci_cli = OCI(profile=config_data["oci_profile"])

# -------- OCI config helpers --------
def _read_oci_config(profile: Optional[str]) -> Dict[str, str]:
    cfg_path = os.path.expanduser("~/.oci/config")
    cp = configparser.ConfigParser()
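
`_read_oci_config` builds on `configparser`, which handles the INI layout of `~/.oci/config` directly. A self-contained sketch of that parsing step (the profile content below is invented; a real file has the same shape):

```python
import configparser

sample = """
[DEFAULT]
user=ocid1.user.oc1..aaaa
tenancy=ocid1.tenancy.oc1..bbbb
region=sa-saopaulo-1
"""

cp = configparser.ConfigParser()
cp.read_string(sample)
# configparser exposes the [DEFAULT] section through .defaults()
profile = dict(cp.defaults())
print(profile["region"])
```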
@@ -144,7 +160,7 @@ async def find_subnet(query_text: str) -> dict:

@mcp.tool()
async def list_availability_domains(compartment_ocid: Optional[str] = None) -> Dict[str, Any]:
    """List ADs via `oci iam availability-domain list`."""
    cid = compartment_ocid or _tenancy_ocid()
    if not cid:
        return {"status": "error", "error": "Missing tenancy compartment OCID."}

@@ -172,8 +188,8 @@ async def find_ad(name_or_hint: str, compartment_ocid: Optional[str] = None) ->
    best = scored[0]
    return {"status": "ok" if best[0] >= 0.6 else "ambiguous", "ad": scored[0][1], "candidates": [n for _, n in scored[:5]]}


async def _list_shapes_from_oci(compartment_ocid: Optional[str] = None, ad: Optional[str] = None) -> Dict[str, Any]:
    """List shapes via `oci compute shape list --all` (needs a compartment; the AD improves the list)."""
    cid = compartment_ocid or _tenancy_ocid()
    if not cid:
        return {"status": "error", "error": "Missing compartment OCID."}

@@ -188,8 +204,8 @@ async def list_shapes(compartment_ocid: Optional[str] = None, ad: Optional[str]

@mcp.tool()
async def resolve_shape(hint: str, compartment_ocid: Optional[str] = None, ad: Optional[str] = None) -> Dict[str, Any]:
    """Resolves a shape by a hint like 'e4' → best match, e.g. 'VM.Standard.E4.Flex'."""
    lst = await _list_shapes_from_oci(compartment_ocid=compartment_ocid, ad=ad)
    if lst.get("status") != "ok":
        return lst
    items = lst["data"]

@@ -198,7 +214,6 @@ async def resolve_shape(hint: str, compartment_ocid: Optional[str] = None, ad: O
    for s in items:
        name = s.get("shape") or ""
        s1 = similarity(q, name)
        fam = _normalize(name.replace("VM.Standard.", ""))
        s1 += 0.2 if fam.startswith(q) or q in fam else 0
        scored.append((s1, name))

@@ -208,6 +223,19 @@ async def resolve_shape(hint: str, compartment_ocid: Optional[str] = None, ad: O
    best = scored[0]
    return {"status": "ok" if best[0] >= 0.6 else "ambiguous", "shape": best[1], "candidates": [n for _, n in scored[:5]]}


@mcp.tool()
async def list_shapes(compartment_ocid: Optional[str] = None, ad: Optional[str] = None) -> Dict[str, Any]:
    """
    List all available compute shapes in the given compartment/availability domain.
    """
    lst = await _list_shapes_from_oci(compartment_ocid=compartment_ocid, ad=ad)
    if lst.get("status") != "ok":
        return lst

    items = lst["data"]
    shapes = [{"shape": s.get("shape"), "ocpus": s.get("ocpus"), "memory": s.get("memoryInGBs")} for s in items]
    return {"status": "ok", "data": shapes}
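
The `resolve_shape` ranking above combines a string-similarity score with a bonus when the hint matches the shape-family suffix. A standalone sketch of that idea, with `difflib.SequenceMatcher` standing in for the project's `similarity` helper:

```python
import difflib
import unicodedata

def _norm(s):
    # Accent-insensitive lowercase, roughly what the server's _normalize does.
    s = unicodedata.normalize("NFKD", s)
    return "".join(c for c in s if not unicodedata.combining(c)).lower()

def rank_shapes(hint, shapes):
    q = _norm(hint)
    scored = []
    for name in shapes:
        s1 = difflib.SequenceMatcher(None, q, _norm(name)).ratio()
        fam = _norm(name.replace("VM.Standard.", ""))
        if fam.startswith(q) or q in fam:
            s1 += 0.2  # family-suffix bonus, as in resolve_shape
        scored.append((s1, name))
    scored.sort(reverse=True)
    return scored

shapes = ["VM.Standard.E4.Flex", "VM.Standard.A1.Flex", "BM.Standard3.64"]
best = rank_shapes("e4", shapes)[0][1]
print(best)
```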

async def list_images(compartment_ocid: Optional[str] = None,
                      operating_system: Optional[str] = None,
                      operating_system_version: Optional[str] = None,

@@ -235,29 +263,26 @@ async def resolve_image(query: str,
                        compartment_ocid: Optional[str] = None,
                        shape: Optional[str] = None) -> Dict[str, Any]:
    """Find the image by a short name or similarity."""
    # heuristic
    q = query.strip()
    os_name, os_ver = None, None
    # examples: "Oracle Linux 9", "OracleLinux 9", "OL9"
    if "linux" in q.lower():
        os_name = "Oracle Linux"
    m = re.search(r"(?:^|\D)(\d{1,2})(?:\D|$)", q)
    if m:
        os_ver = m.group(1)

    # Filter for version
    lst = await list_images(compartment_ocid=compartment_ocid, operating_system=os_name, operating_system_version=os_ver)
    if lst.get("status") != "ok":
        return lst
    items = lst["data"]
    if not items:
        # fallback: no filter, list everything and fuzzy-match on display-name
        lst = await list_images(compartment_ocid=compartment_ocid)
        if lst.get("status") != "ok":
            return lst
        items = lst["data"]

    # rank by similarity of display-name and creation date
    ranked = []
    for img in items:
        dn = img.get("display-name","")
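
The heuristic at the top of `resolve_image` is worth seeing on its own: it derives an OS name and a 1-2 digit version from free text before any listing happens. The same regex and check, extracted as a small function:

```python
import re

def extract_os_hint(q):
    # Detect "Oracle Linux" plus a short version number, as resolve_image does.
    os_name, os_ver = None, None
    if "linux" in q.lower():
        os_name = "Oracle Linux"
    m = re.search(r"(?:^|\D)(\d{1,2})(?:\D|$)", q)
    if m:
        os_ver = m.group(1)
    return os_name, os_ver

print(extract_os_hint("Oracle Linux 9"))
print(extract_os_hint("OL9"))
```

Note that "OL9" yields a version but no OS name, which is why the function keeps a fuzzy fallback over `display-name`.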

@@ -284,7 +309,7 @@ def _norm(s: str) -> str:
@mcp.tool()
async def find_compartment(query_text: str) -> dict:
    """
    Find compartment ocid by the name; the compartment ocid is the identifier field
    """
    structured = f"query compartment resources where displayName =~ '.*{query_text}*.'"
    code, out, err = oci_cli.run(["search","resource","structured-search","--query-text", structured])

@@ -307,45 +332,62 @@ async def create_compute_instance(
    ssh_authorized_keys_path: Optional[str] = None,
    extra_args: Optional[List[str]] = None,
) -> Dict[str, Any]:
    """
    Create an OCI Compute instance via `oci` CLI.
    Missing parameters should be asked upstream by the agent.

    Example:
        compartment_id: ocid1.compartment.oc1..aaaa...
        subnet_id: ocid1.subnet.oc1.sa-saopaulo-1.aaaa...
        shape: VM.Standard.E4.Flex
        availability_domain: IAfA:SA-SAOPAULO-1-AD-1
        image_id: ocid1.image.oc1.sa-saopaulo-1.aaaa...
        display_name: teste_hoshikawa
        shape-config: {"ocpus": 2, "memoryInGBs": 16}
    """

    # mount shape-config automatically
    shape_config = None
    if ocpus is not None and memory is not None:
        shape_config = json.dumps({"ocpus": ocpus, "memoryInGBs": memory})

    args = [
        "compute", "instance", "launch",
        "--compartment-id", compartment_ocid or "",
        "--subnet-id", subnet_ocid or "",
        "--shape", shape or "",
        "--availability-domain", availability_domain or "",
        "--image-id", image_ocid or "",
        #"--source-details", json.dumps({"sourceType": "image", "imageId": image_ocid or ""}),
    ]

    if shape_config:
        args += ["--shape-config", shape_config]

    if display_name:
        args += ["--display-name", display_name]

    if ssh_authorized_keys_path:
        args += ["--metadata", json.dumps({
            "ssh_authorized_keys": open(ssh_authorized_keys_path, "r", encoding="utf-8").read()
        })]

    if extra_args:
        args += extra_args

    # minimal validation
    for flag in ["--compartment-id", "--subnet-id", "--shape", "--availability-domain", "--image-id"]:
        if "" in [args[args.index(flag) + 1]]:
            return {"status": "error", "error": f"Missing required {flag} value"}

    code, out, err = oci_cli.run(args)
    if code != 0:
        return {"status": "error", "error": err.strip(), "stdout": out}

    try:
        payload = json.loads(out)
    except Exception:
        payload = {"raw": out}

    return {"status": "ok", "oci_result": payload}


@mcp.tool()
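
The minimal validation loop deserves a closer look: it relies on each required flag being immediately followed by its value in `args`. A standalone sketch of that check (`missing_flags` is a hypothetical helper; the OCIDs are fake):

```python
def missing_flags(args, required):
    # A required flag whose following value is "" means the agent upstream
    # still has to ask the user for it.
    missing = []
    for flag in required:
        if flag not in args or args[args.index(flag) + 1] == "":
            missing.append(flag)
    return missing

args = ["compute", "instance", "launch",
        "--compartment-id", "ocid1.compartment.oc1..xxx",
        "--subnet-id", "",
        "--shape", "VM.Standard.E4.Flex",
        "--availability-domain", "IAfA:SA-SAOPAULO-1-AD-1",
        "--image-id", "ocid1.image.oc1..yyy"]
gaps = missing_flags(args, ["--compartment-id", "--subnet-id", "--shape",
                            "--availability-domain", "--image-id"])
print(gaps)
```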
274 files/webchat.py

@@ -123,6 +123,20 @@ def check_truncation(response: dict):
        pass
    return False

def reset_state():
    memory_state.messages = []
    memory_state.parameters = {
        "compartment_id": None,
        "subnet_id": None,
        "availability_domain": None,
        "image_id": None,
        "shape": None,
        "ocpus": None,
        "memoryInGBs": None,
        "display_name": None
    }
    memory_state.candidates = {}

# ----------------------------
# LLM
# ----------------------------
@@ -131,73 +145,182 @@ llm = ChatOCIGenAI(
    service_endpoint=config_data["llm_endpoint"],
    compartment_id=config_data["compartment_id"],
    auth_profile=config_data["oci_profile"],
    model_kwargs={"temperature": 0.0, "top_p": 0.0, "max_tokens": 4000}
)

# ----------------------------
# PROMPT
# ----------------------------

system_text = r"""
You are an **OCI Operations Agent** with access to MCP tools (server `oci-ops`).
Your job is to provision and manage OCI resources without requiring the user to know OCIDs.
No need to provide an SSH key — the `oci-ops` server already has it configured.

====================
## TOOLS
- `create_compute_instance` → Create a new Compute instance
- `resolve_image` / `list_images` → Resolve or list images
- `resolve_shape` / `list_shapes` → Resolve or list shapes
- `find_subnet` → Find subnet
- `find_compartment` → Find compartment
- `find_ad` / `list_availability_domains` → Resolve or list availability domains
- `oci_cli_passthrough` → Run raw OCI CLI (expert use only)

====================
## PARAMETER TYPES
There are TWO categories of parameters:

### 1. Literal parameters (must always be extracted directly from user text, never candidates):
- display_name
- ocpus
- memoryInGBs
Rules:
- Extract display_name from phrases like "vm chamada X", "nome X", "VM X".
- Extract ocpus from numbers followed by "ocpus", "OCPUs", "cores", "vCPUs".
- Extract memoryInGBs from numbers followed by "GB", "gigabytes", "giga".
- These values must NEVER be null if present in the user request.
- These values must NEVER go into "candidates".

### 2. Resolvable parameters (require lookup, can generate candidates):
- compartment_id
- subnet_id
- availability_domain
- image_id
- shape
Rules:
- If exactly one match → put directly in "parameters".
- If multiple matches → list them in "candidates" for that field.
- If no matches → leave null in "parameters" and add an "ask".
- Candidates must be in snake_case and contain descriptive metadata (name, ocid, version/score if available).

====================
## PIPELINE (MANDATORY)

### STEP 1 — Extract all values literally mentioned
- Parse every candidate value directly from the user request text.
- Do not decide yet whether it is literal or resolvable.
- Example: "create vm called test01 with 2 ocpus and 16 GB memory, image Oracle Linux 9" → extract:
  {{ "display_name": "test01", "ocpus": 2, "memoryInGBs": 16, "image": "Oracle Linux 9" }}

### STEP 2 — Classify values into:
- **Literal parameters (always final, never candidates):**
  - display_name
  - ocpus
  - memoryInGBs
- **Resolvable parameters (require OCID lookup or mapping):**
  - compartment_id
  - subnet_id
  - availability_domain
  - image_id
  - shape

====================
## STEP 3 — Resolve resolvable parameters
- For each resolvable parameter (compartment_id, subnet_id, availability_domain, image_id, shape):
  - If exactly one match is found → assign directly in "parameters".
  - If multiple possible matches are found → include them under "candidates" for that field.
  - If no matches are found → add a concise "ask".

====================
## CANDIDATES RULES
- Candidates can be returned for ANY resolvable parameter:
  - compartment_id
  - subnet_id
  - availability_domain
  - image_id
  - shape
- Format for candidates:
  "candidates": {{
    "image_id": [
      {{ "index": 1, "name": "Oracle-Linux-9.6-2025.09.16-0", "ocid": "ocid1.image.oc1....", "version": "2025.09.16", "score": 0.98 }},
      {{ "index": 2, "name": "Oracle-Linux-9.6-2025.08.31-0", "ocid": "ocid1.image.oc1....", "version": "2025.08.31", "score": 0.96 }}
    ],
    "shape": [
      {{ "index": 1, "name": "VM.Standard.E4.Flex", "ocid": "ocid1.shape.oc1....", "score": 0.97 }},
      {{ "index": 2, "name": "VM.Standard.A1.Flex", "ocid": "ocid1.shape.oc1....", "score": 0.94 }}
    ]
  }}
- Do not include null values in candidates.
- Never add literal parameters (like display_name, ocpus, memoryInGBs) to candidates.
- Keys in candidates must always be snake_case.

====================
## CANDIDATES STRICT RULES

- Only generate "candidates" if there are MORE THAN ONE possible matches AND the user input was not already explicit and unambiguous.
- If the user explicitly specifies a resolvable parameter value (e.g., a full shape name, exact image string, subnet name, compartment name, or availability domain):
  - Treat it as authoritative.
  - Assign it directly to "parameters".
  - Do NOT generate candidates and do NOT ask for confirmation.
- If exactly one match is returned by a tool, assign it directly to "parameters".
- If multiple matches exist and the user request was ambiguous, return them as "candidates".
- If no matches exist, leave the parameter as null and add an "ask".
- Never ask the user to select an option if only a single match exists.

====================
## CANDIDATE HANDLING

- Candidates are used ONLY for resolvable parameters (compartment_id, subnet_id, availability_domain, image_id, shape).
- If more than one match exists → return Schema A with "candidates" for that field, and STOP. Do not also build Schema B in the same turn.
- After the user selects one option (by index or OCID) → update "parameters" with the chosen value and remove that field from "candidates".
- Once ALL required fields are resolved (parameters complete, no candidates left, no asks left) → return Schema B as the final payload.
- Never present the same candidates more than once.
- Never mix Schema A and Schema B in a single response.

====================
## TOOL USAGE AND CANDIDATES

- For every resolvable parameter (compartment_id, subnet_id, availability_domain, image_id, shape):
  - Always attempt to resolve using the proper MCP tool:
    * find_compartment → for compartment_id
    * find_subnet → for subnet_id
    * find_ad / list_availability_domains → for availability_domain
    * resolve_image / list_images → for image_id
    * resolve_shape / list_shapes → for shape
  - If the tool returns exactly one match → put the OCID directly in "parameters".
  - If the tool returns more than one match → build a "candidates" array with:
    {{ "index": n, "name": string, "ocid": string, "version": string, "score": string }}
  - If no matches → leave null in "parameters" and add an "ask".

- Candidates MUST always include the **real OCIDs** from tool output.
- Never return plain names like "Oracle Linux 9" or "VM.Standard.E4.Flex" as candidates without the corresponding OCID.
- Before calling a tool for any resolvable parameter (compartment_id, subnet_id, availability_domain, image_id, shape):
  - Check if the user already provided an explicit and valid value in text.
  - If yes → assign directly, skip candidates, skip further resolution.
  - If ambiguous (e.g., "Linux image" without version) → call tool, possibly return candidates.
  - If missing entirely → call tool and return ask if nothing is found.

====================
## RULES
- Always use snake_case for "parameters": compartment_id, subnet_id, availability_domain, image_id, shape, ocpus, memoryInGBs, display_name.
- Only when calling `create_compute_instance`, convert to camelCase: compartmentId, subnetId, availabilityDomain, imageId, displayName, shape, shapeConfig.
- Never mix snake_case and camelCase in the same JSON object.
- If ambiguous (>1 results) → return in "candidates" with this format:
  "candidates": {{
    "image_id": [
      {{ "index": 1, "name": "Oracle-Linux-9.6-2025.09.16-0", "ocid": "ocid1.image.oc1....", "version": "2025.09.16", "score": 0.99 }},
      {{ "index": 2, "name": "Oracle-Linux-9.6-2025.08.31-0", "ocid": "ocid1.image.oc1....", "version": "2025.08.31", "score": 0.97 }}
    ]
  }}
- Do not include null/None values in candidates.
- If no matches → just return "ask".
- If exactly one → assign directly in "parameters".

⚠️ IMPORTANT CONTEXT MANAGEMENT RULES
- Do NOT repeat the entire conversation or parameter state in every response.
- Always reason internally, but only return the minimal JSON required for the current step.
- Never include past candidates again once they were shown. Keep them only in memory.
- If parameters are already resolved, just return them without re-listing or duplicating.
- Summarize long context internally. Do not expand or re-echo user instructions.
- Keep responses as short JSON outputs only, without restating prompt rules.

====================
### STEP 4 — Assemble JSON (Schema A if still resolving, Schema B if final)
- Schema A (resolving phase):
  {{
    "parameters": {{ all snake_case keys }},
    "candidates": {{ only if ambiguity > 1 }},
    "ask": string (if still missing info)
  }}
- Schema B (ready for creation):
  {{
    "compartmentId": string,
    "subnetId": string,
    "availabilityDomain": string,
    "imageId": string,
    "displayName": string,
    "shape": string,
    "shapeConfig": {{ "ocpus": number, "memoryInGBs": number }}
  }}

Then return:
  {{ "result": "✅ Creation of resource is Done." }}

⚠️ JSON must be strictly valid (RFC8259).
No markdown, no comments, no truncation, no null placeholders.

### STEP 5 — Output contract
- Respond ONLY with one valid JSON object.
- Never output markdown, comments, or explanations.
- Never put literal parameters in "candidates".
- Never leave literal parameters null if present in text.
- Always use snake_case for Schema A and camelCase for Schema B.
"""
|
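The two reply shapes in the contract above can be told apart mechanically before the app acts on them. A minimal sketch, assuming replies arrive as raw JSON text; `classify_reply` and `SCHEMA_B_KEYS` are hypothetical names, not part of this repository:

```python
import json

# Schema B carries every field needed to launch the instance (camelCase).
SCHEMA_B_KEYS = {
    "compartmentId", "subnetId", "availabilityDomain",
    "imageId", "displayName", "shape", "shapeConfig",
}

def classify_reply(reply_text: str) -> str:
    """Return 'A' (still resolving), 'B' (ready for creation), or 'invalid'."""
    try:
        data = json.loads(reply_text)
    except json.JSONDecodeError:
        return "invalid"
    if not isinstance(data, dict):
        return "invalid"
    if SCHEMA_B_KEYS.issubset(data):
        return "B"  # all creation fields present → final payload
    if "parameters" in data or "candidates" in data or "ask" in data:
        return "A"  # still collecting or disambiguating parameters
    return "invalid"
```

A dispatcher like this keeps the Flask route from having to guess which schema the model emitted.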

prompt = ChatPromptTemplate.from_messages([

@@ -252,13 +375,24 @@ def index():

@app.route("/send", methods=["POST"])
def send():
    user_message = request.form["message"]

    if user_message.strip().lower() in ["reset", "newvm"]:
        reset_state()
        return Markup(
            f"<div class='message-user'>You: {user_message}</div>"
            f"<div class='message-bot'>Assistant: Status reset. You can start a new request.</div>"
        )

    memory_state.messages.append(HumanMessage(content=user_message))
    user_html = f"<div class='message-user'>You: {user_message}</div>"

    try:
        # inject the current state into the conversation
        params_json = json.dumps({"parameters": memory_state.parameters}, indent=2)
        context_message = AIMessage(content=f"Current known parameters:\n{params_json}")
        state_block = json.dumps({
            "parameters": memory_state.parameters,
            "candidates": memory_state.candidates
        }, ensure_ascii=False)

        context_message = AIMessage(content=f"Current known state:\n{state_block}")

        result = asyncio.run(agent_executor.ainvoke({
            "messages": memory_state.messages + [context_message]
@@ -280,46 +414,50 @@ def send():

        parsed = sanitize_json(assistant_reply)

        if parsed and "parameters" in parsed:
            # update parameters
            # 🔹 Smart merge: only overwrite when a non-null value comes in
            for k, v in parsed["parameters"].items():
                if v is not None:
                if v not in (None, "null", ""):
                    memory_state.parameters[k] = v

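The smart merge in this hunk can be isolated into a pure helper, which makes the null-skipping behavior easy to test on its own. A minimal sketch; `merge_parameters` is a hypothetical name, not taken from this codebase:

```python
def merge_parameters(current: dict, incoming: dict) -> dict:
    """Merge newly parsed parameters into the known state,
    skipping null-ish values so earlier answers are never erased."""
    merged = dict(current)  # copy: leave the caller's state untouched
    for key, value in incoming.items():
        if value not in (None, "null", ""):
            merged[key] = value
    return merged
```

The key property is that a model reply re-emitting `null` for an already-resolved field cannot wipe out what the user said earlier.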
            print("📌 Current status:", memory_state.parameters)

            missing = validate_payload(memory_state.parameters)
            if not missing:
                print("✅ All parameters filled in. The agent should now create the VM.")
            else:
                print("⚠️ Missing parameters:", missing)
            if not missing:
                debug_info += "\n✅ All parameters filled in. The agent should now create the VM."
            else:
                debug_info += f"\n⚠️ Missing parameters: {missing}"

            if missing:
                auto_followup = f"Please resolve the following missing parameters: {missing}"
                memory_state.messages.append(HumanMessage(content=auto_followup))
                # inject a structured command requesting resolution
                cmd = json.dumps({
                    "type": "resolve",
                    "missing": missing,
                    "hint": "Return Schema A JSON only."
                })
                memory_state.messages.append(HumanMessage(content=cmd))
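`validate_payload` is called above but its body is not shown in this hunk. A minimal sketch of what such a check plausibly looks like, assuming it returns the required snake_case keys that are still unset; the `REQUIRED_KEYS` list is an assumption, not taken from the repository:

```python
# Hypothetical required-field list for a Flex-shape VM request.
REQUIRED_KEYS = (
    "compartment_id", "subnet_id", "availability_domain",
    "image_id", "display_name", "shape", "ocpus", "memory_in_gbs",
)

def validate_payload(parameters: dict) -> list:
    """Return the required keys that are still missing or null-ish,
    in a stable order suitable for the follow-up prompt."""
    return [
        key for key in REQUIRED_KEYS
        if parameters.get(key) in (None, "null", "")
    ]
```

Returning an ordered list (rather than a set) keeps the auto-generated follow-up message deterministic across turns.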
            # append debug_info to the reply sent to the browser
            # append debug_info to the reply
            assistant_reply += "\n\n" + debug_info

            # if candidates were returned
            # 🔹 If candidates were returned
            if parsed and "candidates" in parsed and parsed["candidates"]:
                memory_state.candidates = parsed["candidates"]
                print("🔍 Candidates found:", memory_state.candidates)

                # build the candidates HTML block
                candidates_html = ""
                for param, items in memory_state.candidates.items():
                    candidates_html += f"<b>Options for {param}:</b><br>"
                    for c in items:
                        line = f"{c.get('index')}. {c.get('name')} — {c.get('ocid')} — v{c.get('version')} — score {c.get('score')}"
                        line = f"{c.get('index')}. {c.get('name')} — {c.get('ocid')} — v{c.get('version', '')} — score {c.get('score', '')}"
                        candidates_html += line + "<br>"

                ask_text = parsed.get("ask", "Choose an index or provide the OCID.")
                assistant_reply = f"{json.dumps({'parameters': memory_state.parameters}, ensure_ascii=False)}<br>{candidates_html}<i>{ask_text}</i>"

                assistant_reply = (
                    f"{json.dumps({'parameters': memory_state.parameters}, ensure_ascii=False)}"
                    f"<br>{candidates_html}<i>{ask_text}</i>"
                )
            else:
                # 🔹 If there are no candidates, clear them
                memory_state.candidates = {}

        else:
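When the user answers one of these candidate prompts with an index or a full OCID, the reply has to be mapped back onto the pending candidate lists. A minimal sketch of that lookup, under stated assumptions; `resolve_choice` is a hypothetical helper, not part of this repository:

```python
def resolve_choice(user_text: str, candidates: dict) -> dict:
    """Map a user's reply ('2', or a pasted OCID) onto the pending
    candidate lists, returning {param: chosen_ocid} for each match."""
    resolved = {}
    text = user_text.strip()
    for param, items in candidates.items():
        for c in items:
            # match either the displayed index or the exact OCID
            if text == str(c.get("index")) or text == c.get("ocid"):
                resolved[param] = c["ocid"]
    return resolved
```

The result can then be fed through the same smart-merge path as any other parsed parameters, and `memory_state.candidates` cleared once the ambiguity is gone.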