mirror of
https://github.com/hoshikawa2/rfp_response_automation.git
synced 2026-03-03 16:09:35 +00:00
first commit
37 README.md
@@ -64,15 +64,23 @@ Each question is parsed into a structured requirement:

Facts are extracted **only when explicitly stated** in documentation and stored as graph triples:

```
REQUIREMENT -[HAS_METRIC]-> messages per hour
REQUIREMENT -[HAS_VALUE]-> < 1 hour
REQUIREMENT -[SUPPORTED_BY]-> Document section
SERVICE -[SUPPORTS_CAPABILITY]-> CAPABILITY
SERVICE -[DOES_NOT_SUPPORT]-> CAPABILITY
SERVICE -[HAS_LIMITATION]-> LIMITATION
SERVICE -[HAS_SLA]-> SLA_VALUE
```

There are three types of information:

- The metric itself: HAS_METRIC
- The value of the metric: HAS_VALUE
- The source of the information: SUPPORTED_BY

There are four types of structured relationships extracted explicitly from documentation:

* Capability support: SERVICE -[SUPPORTS_CAPABILITY]-> CAPABILITY
* Capability exclusion: SERVICE -[DOES_NOT_SUPPORT]-> CAPABILITY
* Technical limitation: SERVICE -[HAS_LIMITATION]-> LIMITATION
* Service level definition: SERVICE -[HAS_SLA]-> SLA_VALUE

Each relationship is:

* Extracted strictly from explicit documentary evidence
* Linked to a specific document chunk (CHUNK_HASH)
* Associated with structured JSON node properties
* Backed by an evidence table for full auditability

This ensures:

- No hallucination
@@ -178,7 +186,7 @@ POST /chat

This code implements a **GraphRAG-based pipeline focused on RFP (Request for Proposal) validation**, not generic Q&A.

>**Download** the code [graphrag_rerank.py](./files/graphrag_rerank.py)
>**Download** the [Source Code](./files/source_code.zip) here

The main goal is to:

- Extract **explicit, verifiable facts** from large PDF contracts and datasheets
@@ -212,7 +220,7 @@ This represents a **strategic shift** from concept-based LLM answers to **compli

- `REQUIREMENT -[HAS_VALUE]-> 1 hour`
- Stored in Oracle Property Graph tables




5. **RFP Requirement Parsing**
   - Each user question is converted into a structured requirement:
@@ -294,8 +302,7 @@ First of all, you need to run the code to prepare the Vector and Graph database.




After the execution, the code opens a chat prompt so you can test it. You can ask questions like:
@@ -368,13 +375,13 @@ root

Open http://localhost:8100 in your browser.



There is also a REST service implemented in the code, so you can automate an RFP list by calling it item by item and obtaining the YES/NO responses you want:

```
curl -X POST http://localhost:8100/chat \
-H "Content-Type: application/json" \
-d '{"question": "What is the RTO of Oracle Application?"}'

curl -X POST http://demo-orcl-api-ai.hoshikawa.com.br:8100/rest/chat \
-H "Content-Type: application/json" -u app_user:app_password \
-d '{ "question": "Does Oracle Cloud Infrastructure (OCI) Compute support online resizing of memory for running virtual machine instances?" }'
```

---
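The curl calls above can also be scripted. A minimal Python sketch, assuming the local `/chat` endpoint from this README (the `build_chat_request` and `ask` helper names are illustrative, not part of the project):

```python
import json
from urllib import request

API_URL = "http://localhost:8100/chat"  # endpoint exposed by app.py

def build_chat_request(question: str):
    """Build the JSON body and headers for one /chat call."""
    body = json.dumps({"question": question}).encode("utf-8")
    return body, {"Content-Type": "application/json"}

def ask(question: str, url: str = API_URL) -> dict:
    """POST a single RFP requirement and return the parsed JSON answer."""
    body, headers = build_chat_request(question)
    req = request.Request(url, data=body, headers=headers, method="POST")
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example batch (uncomment once the server is running):
# for q in ["What is the RTO of Oracle Application?"]:
#     print(q, "->", ask(q)["result"]["answer"])
```

Looping `ask` over a spreadsheet of requirements gives the item-by-item YES/NO automation described above.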
295 README_COMPLETE_TUTORIAL.md (new file)
@@ -0,0 +1,295 @@

# 🧠 Oracle GraphRAG RFP AI -- Complete Tutorial

Enterprise-grade deterministic RFP validation engine built with:

- Oracle Autonomous Database 23ai
- Oracle Property Graph
- OCI Generative AI (LLMs + Embeddings)
- FAISS Vector Search
- Flask REST API
- Hybrid Graph + Vector + JSON reasoning

------------------------------------------------------------------------

# 📌 Introduction

This project implements a **deterministic RFP validation engine**.

Unlike traditional RAG systems that generate conceptual answers, this solution is designed to:

- Validate contractual and compliance requirements
- Produce only: YES / NO / PARTIAL
- Provide exact documentary evidence
- Eliminate hallucination risk
- Ensure full traceability

This tutorial walks through the full architecture and implementation.

------------------------------------------------------------------------

# 🏗️ Full Architecture

```
PDF Documents
 └─► Semantic Chunking
      ├─► FAISS Vector Index
      ├─► LLM Triple Extraction
      │    └─► Oracle 23ai Property Graph
      │         ├─► Structured JSON Node Properties
      │         ├─► Edge Confidence Weights
      │         └─► Evidence Table
      └─► Hybrid Retrieval Layer
           ├─► Vector Recall
           ├─► Graph Filtering
           ├─► Oracle Text
           └─► Graph-aware Reranking
                └─► Deterministic LLM Decision
                     └─► REST Response
```

------------------------------------------------------------------------
# 🧩 Step 1 -- Environment Setup

You need:

- Oracle Autonomous Database 23ai
- OCI Generative AI enabled
- Python 3.10+
- FAISS installed
- Oracle Python driver (`oracledb`)

Install dependencies:

```
pip install oracledb langchain faiss-cpu flask pypandoc
```

------------------------------------------------------------------------
# 📄 Step 2 -- PDF Ingestion

- Load PDFs
- Perform semantic chunking
- Normalize headings and tables
- Store chunk metadata, including:
  - chunk_hash
  - source_url

Chunks feed both:

- FAISS
- Graph extraction

------------------------------------------------------------------------
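The metadata carried by each chunk can be sketched as follows. `make_chunk_record` and the sample values are illustrative, not the project's actual helper; `chunk_hash` is computed as a SHA-256 digest of the chunk text, the same scheme used by the `chunk_hash` helper in `faiss_to_oracle_vector.py`:

```python
import hashlib
from datetime import datetime, timezone

def make_chunk_record(text: str, source_url: str) -> dict:
    """Attach the metadata each chunk carries into FAISS and graph extraction."""
    return {
        "content": text,
        "chunk_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "source_url": source_url,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_chunk_record(
    "OCI Compute supports live memory resizing on flexible shapes.",
    "https://docs.oracle.com/en-us/iaas/Content/Compute/",
)
print(record["chunk_hash"][:12])  # 64-hex digest, truncated for display
```

Because the hash is derived from the content, the same chunk always maps to the same `chunk_hash`, which is what lets graph evidence and vector entries refer to one another.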
# 🧠 Step 3 -- Triple Extraction (Graph Creation)

The function:

```
create_knowledge_graph(chunks)
```

uses the LLM to extract ONLY explicit relationships:

```
SERVICE -[SUPPORTS_CAPABILITY]-> CAPABILITY
SERVICE -[DOES_NOT_SUPPORT]-> CAPABILITY
SERVICE -[HAS_LIMITATION]-> LIMITATION
SERVICE -[HAS_SLA]-> SLA_VALUE
```

No inference is allowed.

------------------------------------------------------------------------
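One way to enforce the "explicit relationships only" rule is to whitelist the relation types after extraction. A minimal sketch (the triple format and the `filter_triples` helper are assumptions for illustration, not the project's actual code):

```python
# Only the four relation types the tutorial allows.
ALLOWED_RELATIONS = {
    "SUPPORTS_CAPABILITY",
    "DOES_NOT_SUPPORT",
    "HAS_LIMITATION",
    "HAS_SLA",
}

def filter_triples(raw_triples):
    """Keep only triples whose relation is in the explicit whitelist."""
    return [
        (subj, rel, obj)
        for subj, rel, obj in raw_triples
        if rel in ALLOWED_RELATIONS
    ]

triples = filter_triples([
    ("OCI Compute", "SUPPORTS_CAPABILITY", "live memory resize"),
    ("OCI Compute", "MIGHT_SUPPORT", "quantum networking"),  # inferred -> dropped
])
print(triples)  # → [('OCI Compute', 'SUPPORTS_CAPABILITY', 'live memory resize')]
```

Anything the LLM emits outside the whitelist is discarded before it can reach the graph.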
# 🏛️ Step 4 -- Oracle Property Graph Setup

The graph is created automatically:

```
CREATE PROPERTY GRAPH GRAPH_NAME
  VERTEX TABLES (...)
  EDGE TABLES (...)
```

Nodes are stored in:

```
KG_NODES_GRAPH_NAME
```

Edges in:

```
KG_EDGES_GRAPH_NAME
```

Evidence in:

```
KG_EVIDENCE_GRAPH_NAME
```

------------------------------------------------------------------------
# 🧩 Step 5 -- Structured Node Properties (Important)

Each node includes structured JSON properties.

Default structure:

``` json
{
  "metadata": {
    "created_by": "RFP_AI_V2",
    "version": "2.0",
    "created_at": "UTC_TIMESTAMP"
  },
  "analysis": {
    "confidence_score": null,
    "source": "DOCUMENT_RAG",
    "extraction_method": "LLM_TRIPLE_EXTRACTION"
  },
  "governance": {
    "validated": false,
    "review_required": false
  }
}
```

Implementation:

``` python
from datetime import datetime

def build_default_node_properties():
    return {
        "metadata": {
            "created_by": "RFP_AI_V2",
            "version": "2.0",
            "created_at": datetime.utcnow().isoformat()
        },
        "analysis": {
            "confidence_score": None,
            "source": "DOCUMENT_RAG",
            "extraction_method": "LLM_TRIPLE_EXTRACTION"
        },
        "governance": {
            "validated": False,
            "review_required": False
        }
    }
```

This guarantees:

- No empty `{}` stored
- Auditability
- Governance extension capability
- Enterprise extensibility

------------------------------------------------------------------------
# 🔎 Step 6 -- Hybrid Retrieval Strategy

The system combines:

1. FAISS semantic recall
2. Graph filtering via Oracle Text
3. Graph-aware reranking
4. Deterministic LLM evaluation

This ensures:

- High recall
- High precision
- No hallucination

------------------------------------------------------------------------
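The graph-aware reranking step can be sketched as a score adjustment: a chunk recalled by FAISS gets a boost when the knowledge graph holds explicit evidence tied to its `chunk_hash`. The `rerank` function and the boost weight are assumptions for illustration, not the project's actual implementation:

```python
def rerank(candidates, evidence_hashes, graph_boost=0.3):
    """Re-order vector-recall candidates, boosting chunks that the
    knowledge graph links to explicit evidence (by chunk hash).

    candidates: list of (chunk_hash, vector_score) pairs, higher is better.
    evidence_hashes: set of chunk hashes referenced by graph edges.
    """
    def score(item):
        chunk_hash, vec_score = item
        bonus = graph_boost if chunk_hash in evidence_hashes else 0.0
        return vec_score + bonus

    return sorted(candidates, key=score, reverse=True)

ranked = rerank(
    [("aaa", 0.82), ("bbb", 0.79), ("ccc", 0.60)],
    evidence_hashes={"bbb"},
)
print([h for h, _ in ranked])  # → ['bbb', 'aaa', 'ccc']
```

A chunk with slightly weaker semantic similarity but explicit graph evidence ("bbb" above) outranks a purely semantic match, which is what gives the pipeline its precision.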
# 🎯 Step 7 -- RFP Requirement Parsing

Each question becomes structured:

``` json
{
  "requirement_type": "NON_FUNCTIONAL",
  "subject": "authentication",
  "expected_value": "MFA",
  "keywords": ["authentication", "mfa"]
}
```

This structure guides retrieval and evaluation.

------------------------------------------------------------------------
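Producing that structure can be sketched with naive keyword extraction; the real pipeline uses the LLM for this step, and `parse_requirement` plus its stopword list are purely illustrative. Only the output shape matches the JSON structure above:

```python
import re

# Hypothetical stopword list; the real parsing is done by the LLM.
STOPWORDS = {"does", "the", "platform", "support", "is", "of", "a", "an"}

def parse_requirement(question: str, requirement_type="NON_FUNCTIONAL") -> dict:
    """Naive sketch: derive keywords from the question text."""
    words = re.findall(r"[a-z0-9]+", question.lower())
    keywords = [w for w in words if w not in STOPWORDS]
    return {
        "requirement_type": requirement_type,
        "subject": keywords[0] if keywords else "",
        "expected_value": keywords[-1].upper() if keywords else "",
        "keywords": keywords,
    }

print(parse_requirement("Does the platform support MFA?"))
```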
# 📊 Step 8 -- Deterministic Decision Engine

LLM output format:

``` json
{
  "answer": "YES | NO | PARTIAL",
  "confidence": "HIGH | MEDIUM | LOW",
  "justification": "Short factual explanation",
  "evidence": [
    {
      "quote": "Exact document text",
      "source": "Document reference"
    }
  ]
}
```

Rules:

- If not explicitly stated → NO
- No inference
- Must provide documentary evidence

------------------------------------------------------------------------
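The rules above can be enforced after the LLM responds: any YES or PARTIAL that carries no documentary evidence is forced back to NO. This `enforce_decision_rules` function is a post-check sketch under that assumption, not the project's actual implementation:

```python
def enforce_decision_rules(result: dict) -> dict:
    """Downgrade any YES/PARTIAL verdict that lacks documentary evidence."""
    evidence = result.get("evidence") or []
    if result.get("answer") in ("YES", "PARTIAL") and not evidence:
        return {
            **result,
            "answer": "NO",
            "confidence": "LOW",
            "justification": "No documentary evidence provided; "
                             "requirement treated as not explicitly satisfied.",
        }
    return result

checked = enforce_decision_rules({"answer": "YES", "evidence": []})
print(checked["answer"])  # → NO
```

Running the check outside the LLM keeps the "no evidence, no YES" guarantee deterministic even if the model misbehaves.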
# 🌐 Step 9 -- Running the Application

Run preprocessing once:

```
python graphrag_rerank.py
```

Run the web UI:

```
python app.py
```

Open:

```
http://localhost:8100
```

Or use REST:

```
curl -X POST http://localhost:8100/chat -H "Content-Type: application/json" -d '{"question": "Does the platform support MFA?"}'
```

------------------------------------------------------------------------
# 🧪 Example RFP Questions

Security, SLA, Performance, Compliance, Vendor Lock-in, Backup, Governance.

The engine validates each with deterministic logic.

------------------------------------------------------------------------

# 🔐 Design Principles

- Evidence-first
- Deterministic outputs
- Zero hallucination tolerance
- Enterprise auditability
- Structured graph reasoning

------------------------------------------------------------------------

# 🚀 Future Extensions

- Confidence scoring via graph density
- Weighted edge scoring
- SLA numeric comparison engine
- JSON-based filtering
- PGQL advanced reasoning
- Enterprise governance workflows

------------------------------------------------------------------------

# 📌 Conclusion

Oracle GraphRAG RFP AI is not a chatbot.

It is a compliance validation engine built for enterprise RFP automation, legal due diligence, and procurement decision support.

Deterministic. Traceable. Expandable.
126 files/app.py
@@ -1,67 +1,83 @@
from flask import Flask, render_template, request, jsonify
import traceback
import json

# 🔥 Import your pipeline
from graphrag_rerank import answer_question
from modules.users import users_bp
from modules.home.routes import home_bp
from modules.chat.routes import chat_bp
from modules.excel.routes import excel_bp
from modules.health.routes import health_bp
from modules.architecture.routes import architecture_bp
from modules.admin.routes import admin_bp
from modules.auth.routes import auth_bp
from modules.rest.routes import rest_bp

app = Flask(__name__)

def parse_llm_json(raw: str) -> dict:
    try:
        raw = raw.replace("```json", "")
        raw = raw.replace("```", "")
        return json.loads(raw)
    except Exception:
        return {
            "answer": "ERROR",
            "justification": "LLM returned invalid JSON",
            "raw_output": raw
        }

# =========================
# Health check (Load Balancer)
# =========================
@app.route("/health", methods=["GET"])
def health():
    return jsonify({"status": "UP"}), 200
from config_loader import load_config
from modules.excel.queue_manager import start_excel_worker
from modules.users.service import create_user
from modules.users.db import get_pool
import bcrypt
import oracledb
from werkzeug.security import generate_password_hash


# =========================
# Web page
# =========================
@app.route("/", methods=["GET"])
def index():
    return render_template("index.html")
def ensure_default_admin():
    """
    Creates the default admin directly in Oracle (no SQLAlchemy)
    """

    pool = get_pool()

    sql_check = "SELECT id FROM app_users WHERE user_role='admin'"
    sql_insert = """
        INSERT INTO app_users (name,email,user_role,password_hash,active)
        VALUES (:1,:2,'admin',:3,1) \
    """

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql_check)
            if not cur.fetchone():
                pwd = generate_password_hash("admin123")
                cur.execute(sql_insert, ["Admin", "admin@local", pwd])
                conn.commit()
                print("Default admin created: admin@local / admin123")


# =========================
# Chat endpoint
# =========================
@app.route("/chat", methods=["POST"])
def chat():
    try:
        data = request.get_json()
        question = data.get("question", "").strip()
def create_app():

        if not question:
            return jsonify({"error": "Empty question"}), 400
    app = Flask(__name__)
    app.secret_key = "super-secret"

        raw_answer = answer_question(question)
        parsed_answer = parse_llm_json(raw_answer)
    # SQLite is no longer used
    # SQLAlchemy is no longer used

        return jsonify({
            "question": question,
            "result": parsed_answer
        })
    start_excel_worker()

    except Exception as e:
        traceback.print_exc()
        return jsonify({"error": str(e)}), 500
    # create the admin in Oracle
    ensure_default_admin()

    app.register_blueprint(users_bp, url_prefix="/admin/users")
    app.register_blueprint(chat_bp)
    app.register_blueprint(excel_bp)
    app.register_blueprint(health_bp)
    app.register_blueprint(architecture_bp)
    app.register_blueprint(home_bp)
    app.register_blueprint(admin_bp, url_prefix="/admin")
    app.register_blueprint(auth_bp)
    app.register_blueprint(rest_bp)

    from modules.core.security import get_current_user

    @app.context_processor
    def inject_user():
        return dict(current_user=get_current_user())

    return app


app = create_app()

config = load_config()
API_BASE_URL = f"{config.app_base}:{config.service_port}"

if __name__ == "__main__":
    app.run(
        host="0.0.0.0",
        port=8100,
        debug=False
    )
app.run(host="0.0.0.0", port=config.service_port)
26 files/config.json (new file)
@@ -0,0 +1,26 @@
{
    "wallet_path": "Wallet_oradb23aiDev",
    "db_alias": "oradb23aiDev_high",
    "username": "admin",
    "password": "Moniquinha1972",

    "service_endpoint": "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    "compartment_id": "ocid1.compartment.oc1..aaaaaaaaexpiw4a7dio64mkfv2t273s2hgdl6mgfvvyv7tycalnjlvpvfl3q",
    "auth_profile": "LATINOAMERICA",

    "llm_model": "meta.llama-3.1-405b-instruct",
    "embedding_model": "cohere.embed-multilingual-v3.0",

    "index_path": "./faiss_index",
    "docs_path": "./docs",

    "graph_name": "OCI_5",
    "service_port": 8102,
    "app_base": "http://127.0.0.1",
    "dev_mode": 0,
    "service_server": "10.0.1.136",

    "bucket_profile": "LATINOAMERICA-SaoPaulo",
    "oci_bucket": "genai_hoshikawa_bucket",
    "oci_namespace": "idi1o0a010nx"
}
205 files/faiss_to_oracle_vector.py (new file)
@@ -0,0 +1,205 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

"""
FAISS → Oracle 23ai Vector migration (FULL GOVERNANCE VERSION)

Migrates:
- content
- source
- chunk_hash
- origin
- created_at
- status
- embedding

"""

import os
import json
import argparse
import hashlib
import datetime
import oracledb

from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceEmbeddings
from tqdm import tqdm


# =====================================================
# CONFIG
# =====================================================

VECTOR_DIM = 1024
TABLE_NAME = "RAG_DOCS"
BATCH_SIZE = 500


# =====================================================
# CLI
# =====================================================

parser = argparse.ArgumentParser()
parser.add_argument("--faiss", required=True)
parser.add_argument("--dsn", required=True)
parser.add_argument("--user", required=True)
parser.add_argument("--password", required=True)
args = parser.parse_args()


# =====================================================
# HELPERS
# =====================================================

def chunk_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


# =====================================================
# 1) LOAD FAISS
# =====================================================

print("🔄 Loading FAISS index...")

dummy_embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

vs = FAISS.load_local(
    args.faiss,
    dummy_embeddings,
    allow_dangerous_deserialization=True
)

docs = vs.docstore._dict
index = vs.index
vectors = index.reconstruct_n(0, index.ntotal)

print(f"✅ Loaded {len(docs)} vectors")

# =========================
# Oracle Autonomous Configuration
# =========================
WALLET_PATH = "Wallet_oradb23aiDev"
DB_ALIAS = "oradb23aiDev_high"
USERNAME = "admin"
PASSWORD = "Moniquinha1972"
os.environ["TNS_ADMIN"] = WALLET_PATH

# =====================================================
# 2) CONNECT ORACLE
# =====================================================

print("🔌 Connecting to Oracle...")

conn = oracledb.connect(
    user=USERNAME,
    password=PASSWORD,
    dsn=DB_ALIAS,
    config_dir=WALLET_PATH,
    wallet_location=WALLET_PATH,
    wallet_password=PASSWORD
)

cur = conn.cursor()


# =====================================================
# 3) CREATE TABLE (FULL SCHEMA)
# =====================================================

print("📦 Creating table if not exists...")

cur.execute(f"""
BEGIN
    EXECUTE IMMEDIATE '
        CREATE TABLE {TABLE_NAME} (
            ID NUMBER GENERATED BY DEFAULT AS IDENTITY,
            CONTENT CLOB,
            SOURCE VARCHAR2(1000),
            CHUNK_HASH VARCHAR2(64),
            STATUS VARCHAR2(20),
            ORIGIN VARCHAR2(50),
            CREATED_AT TIMESTAMP,
            EMBED VECTOR({VECTOR_DIM})
        )';
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE != -955 THEN RAISE; END IF;
END;
""")

conn.commit()


# =====================================================
# 4) INSERT BATCH
# =====================================================

print("⬆️ Migrating vectors...")

sql = f"""
INSERT INTO {TABLE_NAME}
(CONTENT, SOURCE, CHUNK_HASH, STATUS, ORIGIN, CREATED_AT, EMBED)
VALUES (:1, :2, :3, :4, :5, :6, :7)
"""

batch = []

for i, (doc_id, doc) in enumerate(tqdm(docs.items())):

    content = doc.page_content
    source = doc.metadata.get("source", "")
    origin = doc.metadata.get("origin", "FAISS")
    created = doc.metadata.get(
        "created_at",
        datetime.datetime.utcnow()
    )

    h = doc.metadata.get("chunk_hash") or chunk_hash(content)

    batch.append((
        content,
        source,
        h,
        "ACTIVE",
        origin,
        created,
        json.dumps(vectors[i].tolist())
    ))

    if len(batch) >= BATCH_SIZE:
        cur.executemany(sql, batch)
        batch.clear()

if batch:
    cur.executemany(sql, batch)

conn.commit()

print("✅ Insert finished")


# =====================================================
# 5) CREATE VECTOR INDEX
# =====================================================

print("⚡ Creating HNSW index...")

cur.execute(f"""
BEGIN
    EXECUTE IMMEDIATE '
        CREATE VECTOR INDEX {TABLE_NAME}_IDX
        ON {TABLE_NAME}(EMBED)
        ORGANIZATION HNSW
        DISTANCE COSINE
    ';
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE != -955 THEN RAISE; END IF;
END;
""")

conn.commit()

print("🎉 Migration complete!")
File diff suppressed because it is too large
297 files/index.html
@@ -1,297 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Oracle AI RFP Response</title>

    <style>
        body {
            font-family: Arial, sans-serif;
            background: linear-gradient(to bottom right, #0f172a, #1e293b);
            min-height: 100vh;
            color: #e5e7eb;
            padding: 40px;
        }

        h1 {
            text-align: center;
            margin-bottom: 40px;
            color: #e2e8f0;
        }

        h2 {
            color: #93c5fd;
            margin-top: 0;
        }

        h3 {
            color: #bfdbfe;
            margin-bottom: 6px;
        }

        .card {
            width: 100%;
            max-width: 1600px;
            margin: 0 auto 40px auto;
            background: rgba(15, 23, 42, 0.75);
            padding: 32px 40px;
            border-radius: 18px;
            border: 1px solid #334155;
            box-shadow: 0 12px 32px rgba(0,0,0,0.45);
        }

        .small {
            font-size: 13px;
            color: #94a3b8;
            line-height: 1.6;
        }

        .highlight {
            color: #93c5fd;
            font-weight: bold;
        }

        textarea {
            width: 100%;
            height: 110px;
            font-size: 16px;
            padding: 14px;
            border-radius: 8px;
            border: none;
            margin-top: 12px;
        }

        button {
            margin-top: 14px;
            padding: 12px 26px;
            font-size: 16px;
            cursor: pointer;
            border-radius: 8px;
            border: none;
            background: #2563eb;
            color: white;
        }

        button:hover {
            background: #1d4ed8;
        }

        pre {
            background: #020617;
            padding: 22px;
            white-space: pre-wrap;
            border-radius: 10px;
            margin-top: 12px;
            font-size: 14px;
        }

        code {
            display: block;
            background: #020617;
            padding: 16px;
            border-radius: 10px;
            font-size: 13px;
            margin-top: 10px;
            color: #e5e7eb;
        }

        ul {
            padding-left: 18px;
        }

        hr {
            border: none;
            border-top: 1px solid #334155;
            margin: 28px 0;
        }
    </style>
</head>
<body>

<h1>🧠 Oracle AI RFP Response</h1>

<!-- ================= INTRODUCTION ================= -->
<div class="card">

    <p class="small">
        Oracle LAD A-Team<br/>
        <span class="highlight">Cristiano Hoshikawa</span><br/>
        <span class="highlight">cristiano.hoshikawa@oracle.com</span>
    </p>

    <p class="small">
        Tutorial:
        <span class="highlight">https://docs.oracle.com/en/learn/oci-genai-pdf</span><br/>
        REST Service Endpoint:
        <span class="highlight">http://demo-orcl-api-ai.hoshikawa.com.br:8101/chat</span>
    </p>

    <hr/>

    <h2>Overview</h2>

    <p class="small">
        This application provides an <strong>AI-assisted RFP response engine</strong> for
        Oracle Cloud Infrastructure (OCI).
        It analyzes natural language requirements and returns a
        <strong>structured, evidence-based technical response</strong>.
    </p>

    <ul class="small">
        <li>Official Oracle technical documentation</li>
        <li>Semantic search using vector embeddings</li>
        <li>Knowledge Graph signals</li>
        <li>Large Language Models (LLMs)</li>
    </ul>

</div>

<!-- ================= TEST AREA ================= -->
<div class="card">

    <h2>Try It — Live RFP Question</h2>

    <p class="small">
        Enter an RFP requirement or technical question below.
        The API will return a structured JSON response.
    </p>

    <textarea id="question" placeholder="Example: Does OCI Compute support Real Application Clusters (RAC)?"></textarea>
    <button onclick="send()">Submit Question</button>

    <h3>AI Response</h3>
    <pre id="answer"></pre>

</div>
<!-- ================= REST API DOC ================= -->
<div class="card">

    <h2>REST API Usage</h2>

    <p class="small">
        The service exposes a <strong>POST</strong> endpoint that accepts a JSON payload.
    </p>

    <code>
curl -X POST http://demo-orcl-api-ai.hoshikawa.com.br:8101/chat \
-H "Content-Type: application/json" \
-d '{
  "question": "Does Oracle Cloud Infrastructure (OCI) Compute support online resizing of memory for running virtual machine instances?"
}'
    </code>

    <h3>Request Parameters</h3>

    <p class="small">
        <strong>question</strong> (string)<br/>
        Natural language description of an RFP requirement or technical capability.
        Small wording changes may affect how intent and evidence are interpreted.
    </p>

</div>

<!-- ================= JSON EXPLANATION ================= -->
<div class="card">

    <h2>AI Response JSON Structure</h2>

    <p class="small">
        The API always returns a <strong>strict and normalized JSON structure</strong>,
        designed for traceability, auditing, and human validation.
    </p>

    <h3>answer</h3>
    <p class="small">
        Final assessment of the requirement:
        <strong>YES</strong>, <strong>NO</strong>, or <strong>PARTIAL</strong>.
        A <strong>NO</strong> means the requirement is not explicitly satisfied as written.
    </p>

    <h3>confidence</h3>
    <p class="small">
        Indicates the strength of the supporting evidence:
        HIGH, MEDIUM, or LOW.
    </p>

    <h3>ambiguity_detected</h3>
    <p class="small">
        Flags whether the requirement is vague, overloaded, or open to interpretation.
    </p>

    <h3>confidence_reason</h3>
    <p class="small">
        Short explanation justifying the confidence level.
    </p>

    <h3>justification</h3>
    <p class="small">
        Technical rationale connecting the evidence to the requirement.
        This is not marketing text.
    </p>

    <h3>evidence</h3>
    <p class="small">
        List of supporting references:
    </p>
    <ul class="small">
        <li><strong>quote</strong> – Exact extracted text</li>
        <li><strong>source</strong> – URL or document reference</li>
    </ul>

</div>
<!-- ================= DISCLAIMERS ================= -->
<div class="card">

    <h2>Important Notes</h2>

    <ul class="small">
        <li>
            Responses are generated by an <strong>LLM</strong>.
            Even with low temperature, minor variations may occur across executions.
        </li>
        <li>
            Results depend on wording, terminology, and framing of the requirement.
        </li>
        <li>
            In many RFPs, an initial <strong>NO</strong> can be reframed into a valid
            <strong>YES</strong> by mapping the requirement to the correct OCI service.
        </li>
        <li>
            <strong>Human review is mandatory.</strong>
            This tool supports architects and RFP teams — it does not replace them.
        </li>
    </ul>

    <p class="small">
        GraphRAG • Oracle Autonomous Database 23ai • Embeddings • Knowledge Graph • LLM • Flask API
    </p>

</div>

<script>
    async function send() {
        const question = document.getElementById("question").value;
        const answerBox = document.getElementById("answer");

        answerBox.textContent = "⏳ Processing request...";

        const res = await fetch("/chat", {
            method: "POST",
            headers: {"Content-Type": "application/json"},
            body: JSON.stringify({ question })
        });

        const data = await res.json();

        if (data.result) {
            answerBox.textContent = JSON.stringify(data.result, null, 2);
        } else {
            answerBox.textContent = "❌ Error: " + JSON.stringify(data);
        }
    }
</script>

</body>
</html>
160 files/modules/admin/routes.py (new file)
@@ -0,0 +1,160 @@

from flask import Blueprint, render_template, request, jsonify, redirect, flash
from modules.core.security import requires_admin_auth
from modules.core.audit import audit_log
import threading

from oci_genai_llm_graphrag_rerank_rfp import (
    search_chunks_for_invalidation,
    revoke_chunk_by_hash,
    get_chunk_metadata,
    add_manual_knowledge_entry,
    reload_all
)

admin_bp = Blueprint("admin", __name__)


# =========================
# ADMIN HOME (invalidate UI)
# =========================
@admin_bp.route("/")
@requires_admin_auth
def admin_home():
    return render_template("admin_menu.html")

@admin_bp.route("/invalidate")
@requires_admin_auth
def invalidate_page():
    return render_template(
        "invalidate.html",
        results=[],
        statement=""
    )

# =========================
# SEARCH CHUNKS
# =========================
@admin_bp.route("/search", methods=["POST"])
@requires_admin_auth
def search_for_invalidation():

    statement = request.form["statement"]

    docs = search_chunks_for_invalidation(statement)

    hashes = [d.metadata.get("chunk_hash") for d in docs if d.metadata.get("chunk_hash")]
    meta = get_chunk_metadata(hashes)

    results = []

    for d in docs:
        h = d.metadata.get("chunk_hash")
        m = meta.get(h, {})

        results.append({
            "chunk_hash": h,
            "source": d.metadata.get("source"),
            "text": d.page_content,
            "origin": m.get("origin"),
            "status": m.get("status")
        })

    return render_template(
        "invalidate.html",
        statement=statement,
        results=results
    )


# =========================
# REVOKE
# =========================
@admin_bp.route("/revoke", methods=["POST"])
@requires_admin_auth
def revoke_chunk_ui():

    data = request.get_json()

    chunk_hash = str(data["chunk_hash"])
    reason = str(data.get("reason", "Manual revoke"))
    audit_log("INVALIDATE", f"chunk_hash={chunk_hash}")

    print("chunk_hash", chunk_hash)
    print("reason", reason)

    revoke_chunk_by_hash(chunk_hash, reason)

    return {"status": "ok", "chunk_hash": chunk_hash}


# =========================
# ADD MANUAL KNOWLEDGE
# =========================
@admin_bp.route("/add-knowledge", methods=["POST"])
@requires_admin_auth
def add_manual_knowledge():

    data = request.get_json(force=True)

    chunk_hash = add_manual_knowledge_entry(
        text=data["text"],
        author="ADMIN",
        reason=data.get("reason"),
        source="MANUAL_INPUT",
|
||||
origin="MANUAL",
|
||||
also_update_graph=True
|
||||
)
|
||||
audit_log("ADD_KNOWLEDGE", f"chunk_hash={chunk_hash}")
|
||||
|
||||
return jsonify({
|
||||
"status": "OK",
|
||||
"chunk_hash": chunk_hash
|
||||
})
|
||||
|
||||
# =========================
|
||||
# UPDATE CHUNK
|
||||
# =========================
|
||||
@admin_bp.route("/update-chunk", methods=["POST"])
|
||||
@requires_admin_auth
|
||||
def update_chunk():
|
||||
|
||||
data = request.get_json() or {}
|
||||
|
||||
chunk_hash = str(data.get("chunk_hash", "")).strip()
|
||||
text = str(data.get("text", "")).strip()
|
||||
|
||||
print("chunk_hash", chunk_hash)
|
||||
print("text", text)
|
||||
|
||||
if not chunk_hash:
|
||||
return {"status": "error", "message": "missing hash"}, 400
|
||||
|
||||
reason = str(data.get("reason", "Manual change"))
|
||||
|
||||
revoke_chunk_by_hash(chunk_hash, reason=reason)
|
||||
chunk_hash = add_manual_knowledge_entry(
|
||||
text=text,
|
||||
author="ADMIN",
|
||||
reason=reason,
|
||||
source="MANUAL_INPUT",
|
||||
origin="MANUAL",
|
||||
also_update_graph=True
|
||||
)
|
||||
audit_log("UPDATE CHUNK", f"chunk_hash={chunk_hash}")
|
||||
|
||||
return jsonify({
|
||||
"status": "OK",
|
||||
"chunk_hash": chunk_hash
|
||||
})
|
||||
|
||||
@admin_bp.route("/reboot", methods=["POST"])
|
||||
@requires_admin_auth
|
||||
def reboot_service():
|
||||
# roda em background pra não travar request
|
||||
threading.Thread(target=reload_all, daemon=True).start()
|
||||
|
||||
return jsonify({
|
||||
"status": "ok",
|
||||
"message": "Knowledge reload started"
|
||||
})
|
||||
83
files/modules/architecture/routes.py
Normal file
@@ -0,0 +1,83 @@
from flask import Blueprint, request, jsonify
import uuid
import json
from pathlib import Path
from modules.core.audit import audit_log

from modules.core.security import requires_app_auth
from .service import start_architecture_job
from .store import ARCH_JOBS, ARCH_LOCK

architecture_bp = Blueprint("architecture", __name__)

ARCH_FOLDER = Path("architecture")

@architecture_bp.route("/architecture/start", methods=["POST"])
@requires_app_auth
def architecture_start():
    data = request.get_json(force=True) or {}
    question = (data.get("question") or "").strip()

    if not question:
        return jsonify({"error": "Empty question"}), 400

    job_id = str(uuid.uuid4())
    audit_log("ARCHITECTURE", f"job_id={job_id}")

    with ARCH_LOCK:
        ARCH_JOBS[job_id] = {
            "status": "RUNNING",
            "logs": []
        }

    start_architecture_job(job_id, question)
    return jsonify({"job_id": job_id})


@architecture_bp.route("/architecture/<job_id>/status", methods=["GET"])
@requires_app_auth
def architecture_status(job_id):
    job_dir = ARCH_FOLDER / job_id
    status_file = job_dir / "status.json"

    # fallback 1: persisted status on disk
    if status_file.exists():
        try:
            return jsonify(json.loads(status_file.read_text(encoding="utf-8")))
        except Exception:
            return jsonify({"status": "ERROR", "detail": "Invalid status file"}), 500

    # fallback 2: in-memory status
    with ARCH_LOCK:
        job = ARCH_JOBS.get(job_id)

    if job:
        return jsonify({"status": job.get("status", "PROCESSING")})

    return jsonify({"status": "NOT_FOUND"}), 404


@architecture_bp.route("/architecture/<job_id>/logs", methods=["GET"])
@requires_app_auth
def architecture_logs(job_id):
    with ARCH_LOCK:
        job = ARCH_JOBS.get(job_id, {})
    return jsonify({"logs": job.get("logs", [])})

@architecture_bp.route("/architecture/<job_id>/result", methods=["GET"])
@requires_app_auth
def architecture_result(job_id):
    job_dir = ARCH_FOLDER / job_id
    result_file = job_dir / "architecture.json"

    # the job has not finished yet
    if not result_file.exists():
        return jsonify({"error": "not ready"}), 404

    try:
        raw = result_file.read_text(encoding="utf-8")
        plan = json.loads(raw)
        return jsonify(plan)

    except Exception as e:
        return jsonify({"error": str(e)}), 500
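The status endpoint above prefers the persisted `status.json` over the in-memory map, so a restart does not lose terminal job states. A minimal standalone sketch of that write/read round-trip (the temp directory and payload values are illustrative; the real service writes under `architecture/<job_id>`):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical job directory standing in for architecture/<job_id>
job_dir = Path(tempfile.mkdtemp()) / "demo-job"
job_dir.mkdir(parents=True, exist_ok=True)
status_file = job_dir / "status.json"

def write_status(state, detail=None):
    # Same payload shape the service persists: {"status": ..., "detail": ...}
    payload = {"status": state}
    if detail:
        payload["detail"] = detail
    status_file.write_text(json.dumps(payload, ensure_ascii=False, indent=2), encoding="utf-8")

write_status("PROCESSING")
write_status("ERROR", "planner timeout")

# The /status route reads this file back and returns it as JSON
recovered = json.loads(status_file.read_text(encoding="utf-8"))
print(recovered)  # {'status': 'ERROR', 'detail': 'planner timeout'}
```

Because the last write wins, the file always reflects the most recent state transition.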
56
files/modules/architecture/service.py
Normal file
@@ -0,0 +1,56 @@
import threading
import json
from pathlib import Path

from .store import ARCH_JOBS, ARCH_LOCK
from oci_genai_llm_graphrag_rerank_rfp import call_architecture_planner, architecture_to_mermaid

ARCH_FOLDER = Path("architecture")
ARCH_FOLDER.mkdir(exist_ok=True)

def make_job_logger(job_id: str):
    def _log(msg):
        with ARCH_LOCK:
            job = ARCH_JOBS.get(job_id)
            if job:
                job["logs"].append(str(msg))
    return _log

def start_architecture_job(job_id: str, question: str):
    job_dir = ARCH_FOLDER / job_id
    job_dir.mkdir(parents=True, exist_ok=True)

    status_file = job_dir / "status.json"
    result_file = job_dir / "architecture.json"

    def write_status(state: str, detail: str | None = None):
        payload = {"status": state}
        if detail:
            payload["detail"] = detail
        status_file.write_text(json.dumps(payload, ensure_ascii=False, indent=2), encoding="utf-8")

        with ARCH_LOCK:
            if job_id in ARCH_JOBS:
                ARCH_JOBS[job_id]["status"] = state
                if detail:
                    ARCH_JOBS[job_id]["detail"] = detail

    write_status("PROCESSING")

    def background():
        try:
            logger = make_job_logger(job_id)

            plan = call_architecture_planner(question, log=logger)
            if not isinstance(plan, dict):
                raise TypeError(f"Planner returned {type(plan)}")

            plan["mermaid"] = architecture_to_mermaid(plan)

            result_file.write_text(json.dumps(plan, ensure_ascii=False, indent=2), encoding="utf-8")
            write_status("DONE")

        except Exception as e:
            write_status("ERROR", str(e))

    threading.Thread(target=background, daemon=True).start()
4
files/modules/architecture/store.py
Normal file
@@ -0,0 +1,4 @@
from threading import Lock

ARCH_LOCK = Lock()
ARCH_JOBS = {}
64
files/modules/auth/routes.py
Normal file
@@ -0,0 +1,64 @@
from flask import Blueprint, render_template, request, redirect, url_for, flash, jsonify, session
from config_loader import load_config
from modules.users.service import signup_user, create_user, authenticate_user

auth_bp = Blueprint(
    "auth",
    __name__,
    template_folder="../../templates/users"
)

config = load_config()

@auth_bp.route("/signup", methods=["GET", "POST"])
def signup():
    if request.method == "POST":
        email = request.form.get("email")
        name = request.form.get("name")

        try:
            link = signup_user(email, name)

            if link and config.dev_mode == 1:
                flash(f"DEV MODE: password link → {link}", "success")
            else:
                flash("User created and email sent", "success")
        except Exception as e:
            flash(str(e), "danger")

        return redirect(url_for("auth.signup"))

    return render_template("users/signup.html")

@auth_bp.route("/register", methods=["POST"])
def register():
    data = request.json
    create_user(data["username"], data["password"])
    return jsonify({"status": "ok"})

@auth_bp.route("/login", methods=["POST"])
def login():
    email = request.form.get("username")
    password = request.form.get("password")

    ok = authenticate_user(email, password)

    if not ok:
        flash("Invalid credentials")
        return redirect("/login")

    session["user_email"] = email

    return redirect("/")

@auth_bp.route("/login", methods=["GET"])
def login_page():
    return render_template("users/login.html")

@auth_bp.route("/logout")
def logout():
    session.clear()  # drop everything from the session
    return redirect("/login")
82
files/modules/chat/routes.py
Normal file
@@ -0,0 +1,82 @@
import json
from flask import Blueprint, request, jsonify
from modules.core.security import requires_app_auth
from oci_genai_llm_graphrag_rerank_rfp import answer_question, search_active_chunks
from modules.core.audit import audit_log
from .service import start_chat_job
from .store import CHAT_JOBS, CHAT_LOCK

chat_bp = Blueprint("chat", __name__)

def parse_llm_json(raw: str) -> dict:
    try:
        if not isinstance(raw, str):
            return {"answer": "ERROR", "justification": "LLM output is not a string", "raw_output": str(raw)}
        raw = raw.replace("```json", "").replace("```", "").strip()
        return json.loads(raw)
    except Exception:
        return {"answer": "ERROR", "justification": "LLM returned invalid JSON", "raw_output": raw}

@chat_bp.route("/chat", methods=["POST"])
@requires_app_auth
def chat():
    data = request.get_json(force=True) or {}
    question = (data.get("question") or "").strip()

    if not question:
        return jsonify({"error": "Empty question"}), 400

    raw_answer = answer_question(question)
    parsed_answer = parse_llm_json(raw_answer)
    audit_log("RFP_QUESTION", f"question={question}")

    # (optional) keep the old evidence/full_text behavior if needed
    # docs = search_active_chunks(question)

    return jsonify({
        "question": question,
        "result": parsed_answer
    })

@chat_bp.post("/chat/start")
def start():
    question = request.json["question"]

    job_id = start_chat_job(question)

    return jsonify({"job_id": job_id})

@chat_bp.get("/chat/<job_id>/status")
def status(job_id):
    with CHAT_LOCK:
        job = CHAT_JOBS.get(job_id)

    if not job:
        return jsonify({"error": "not found"}), 404

    return jsonify({"status": job["status"]})

@chat_bp.get("/chat/<job_id>/result")
def result(job_id):
    with CHAT_LOCK:
        job = CHAT_JOBS.get(job_id)

    if not job:
        return jsonify({"error": "not found"}), 404

    return jsonify({
        "result": parse_llm_json(job["result"]),
        "error": job["error"]
    })

@chat_bp.get("/chat/<job_id>/logs")
def logs(job_id):
    with CHAT_LOCK:
        job = CHAT_JOBS.get(job_id, {})

    return jsonify({"logs": job.get("logs", [])})
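`parse_llm_json` above tolerates LLMs that wrap their JSON in markdown code fences. A self-contained sketch of that stripping logic (the fence string is built indirectly only so this snippet's own code fence stays intact; the sample payloads are illustrative):

```python
import json

FENCE = "`" * 3  # three backticks, built indirectly to keep this snippet self-contained

def parse_llm_json(raw):
    # Strip markdown code fences that LLMs often wrap around JSON, then parse
    try:
        if not isinstance(raw, str):
            return {"answer": "ERROR", "justification": "LLM output is not a string", "raw_output": str(raw)}
        raw = raw.replace(FENCE + "json", "").replace(FENCE, "").strip()
        return json.loads(raw)
    except Exception:
        return {"answer": "ERROR", "justification": "LLM returned invalid JSON", "raw_output": raw}

fenced = FENCE + 'json\n{"answer": "YES", "justification": "SLA is documented"}\n' + FENCE
clean = parse_llm_json(fenced)
broken = parse_llm_json("not json at all")

print(clean["answer"])   # YES
print(broken["answer"])  # ERROR
```

Returning a structured `ERROR` object instead of raising keeps the route from ever answering 500 just because the model produced malformed output.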
44
files/modules/chat/service.py
Normal file
@@ -0,0 +1,44 @@
import threading
import uuid
from .store import CHAT_JOBS, CHAT_LOCK
from oci_genai_llm_graphrag_rerank_rfp import answer_question


def start_chat_job(question: str):
    job_id = str(uuid.uuid4())

    with CHAT_LOCK:
        CHAT_JOBS[job_id] = {
            "status": "PROCESSING",
            "result": None,
            "error": None,
            "logs": []
        }

    def log(msg):
        with CHAT_LOCK:
            CHAT_JOBS[job_id]["logs"].append(str(msg))

    def background():
        try:
            log("Starting answer_question()")

            result = answer_question(question)

            with CHAT_LOCK:
                CHAT_JOBS[job_id]["result"] = result
                CHAT_JOBS[job_id]["status"] = "DONE"

            log("DONE")

        except Exception as e:
            with CHAT_LOCK:
                CHAT_JOBS[job_id]["error"] = str(e)
                CHAT_JOBS[job_id]["status"] = "ERROR"

            log(f"ERROR: {e}")

    threading.Thread(target=background, daemon=True).start()

    return job_id
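The job pattern used throughout this service (UUID job id, shared dict guarded by a lock, daemon worker thread) can be exercised in isolation with a stub in place of `answer_question`; the stub and the `Event` used to wait for completion are illustrative additions, since the web app polls the status endpoint instead:

```python
import threading
import uuid

JOBS = {}
LOCK = threading.Lock()

def fake_answer_question(question):
    # Stand-in for the real LLM call
    return f"ANSWER({question})"

def start_job(question):
    job_id = str(uuid.uuid4())
    done = threading.Event()
    with LOCK:
        JOBS[job_id] = {"status": "PROCESSING", "result": None}

    def background():
        result = fake_answer_question(question)
        with LOCK:
            JOBS[job_id]["result"] = result
            JOBS[job_id]["status"] = "DONE"
        done.set()

    threading.Thread(target=background, daemon=True).start()
    return job_id, done

job_id, done = start_job("Does the service support a 99.9% SLA?")
done.wait(timeout=5)  # clients of the real app poll /chat/<job_id>/status instead
print(JOBS[job_id]["status"])  # DONE
```

The lock matters: both the worker thread and the polling routes touch the same dict, so every read and write goes through the same critical section.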
4
files/modules/chat/store.py
Normal file
@@ -0,0 +1,4 @@
import threading

CHAT_JOBS = {}
CHAT_LOCK = threading.Lock()
11
files/modules/core/audit.py
Normal file
@@ -0,0 +1,11 @@
from flask import session, request
from datetime import datetime

def audit_log(action: str, detail: str = ""):
    email = session.get("user_email", "anonymous")
    ip = request.remote_addr

    line = f"{datetime.utcnow().isoformat()} | {email} | {ip} | {action} | {detail}\n"

    with open("audit.log", "a", encoding="utf-8") as f:
        f.write(line)
92
files/modules/core/security.py
Normal file
@@ -0,0 +1,92 @@
from functools import wraps
from flask import request, Response, url_for, session, redirect
from werkzeug.security import check_password_hash
from modules.core.audit import audit_log
from modules.users.db import get_pool

# =========================
# Base authentication
# =========================

def authenticate():
    return redirect(url_for("auth.login_page"))

def get_current_user():
    email = session.get("user_email")
    if not email:
        return None

    sql = """
        SELECT id, username, email, user_role, active
        FROM app_users
        WHERE email = :1
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [email])
            row = cur.fetchone()

    if not row:
        return None

    return {
        "id": row[0],
        "username": row[1],
        "email": row[2],
        "role": row[3],
        "active": row[4]
    }


# =========================
# Decorators
# =========================

def requires_login(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        user = get_current_user()
        if not user:
            return authenticate()
        return f(*args, **kwargs)
    return wrapper


def requires_app_auth(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        user = get_current_user()

        if not user:
            return authenticate()

        role = (user.get("role") or "").strip().lower()

        if role not in ["user", "admin"]:
            return authenticate()

        audit_log("LOGIN_SUCCESS", f"user={user}")

        return f(*args, **kwargs)
    return wrapper


def requires_admin_auth(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        user = get_current_user()

        if not user:
            return authenticate()

        if user.get("role") != "admin":
            return authenticate()

        audit_log("LOGIN_ADMIN_SUCCESS", f"user={user}")

        return f(*args, **kwargs)
    return wrapper
113
files/modules/excel/queue_manager.py
Normal file
@@ -0,0 +1,113 @@
from queue import Queue
import threading
import logging

logger = logging.getLogger(__name__)

# =========================================
# CONFIG
# =========================================

MAX_CONCURRENT_EXCEL = 10

# =========================================
# STATE
# =========================================

EXCEL_QUEUE = Queue()
EXCEL_LOCK = threading.Lock()

ACTIVE_JOBS = set()  # jobs currently executing

# =========================================
# Helpers
# =========================================

def get_queue_position(job_id: str) -> int:
    """
    Returns:
        0    = already executing
        1..N = position in the queue
        -1   = not found
    """
    with EXCEL_LOCK:

        if job_id in ACTIVE_JOBS:
            return 0

        items = list(EXCEL_QUEUE.queue)

        for i, item in enumerate(items):
            if item["job_id"] == job_id:
                return i + 1

        return -1


# =========================================
# WORKER
# =========================================

def _worker(worker_id: int):
    logger.info(f"🟢 Excel worker-{worker_id} started")

    while True:
        job = EXCEL_QUEUE.get()

        job_id = job["job_id"]

        try:
            with EXCEL_LOCK:
                ACTIVE_JOBS.add(job_id)

            logger.info(f"🚀 [worker-{worker_id}] Processing {job_id}")

            job["fn"](*job["args"], **job["kwargs"])

            logger.info(f"✅ [worker-{worker_id}] Finished {job_id}")

        except Exception as e:
            logger.exception(f"❌ [worker-{worker_id}] Failed {job_id}: {e}")

        finally:
            with EXCEL_LOCK:
                ACTIVE_JOBS.discard(job_id)

            EXCEL_QUEUE.task_done()


# =========================================
# START POOL
# =========================================

def start_excel_worker():
    """
    Starts a pool with MAX_CONCURRENT_EXCEL concurrent workers.
    """
    for i in range(MAX_CONCURRENT_EXCEL):
        threading.Thread(
            target=_worker,
            args=(i + 1,),
            daemon=True
        ).start()

    logger.info(f"🔥 Excel queue started with {MAX_CONCURRENT_EXCEL} workers")


# =========================================
# ENQUEUE
# =========================================

def enqueue_excel_job(job_id, fn, *args, **kwargs):
    job = {
        "job_id": job_id,
        "fn": fn,
        "args": args,
        "kwargs": kwargs
    }

    with EXCEL_LOCK:
        EXCEL_QUEUE.put(job)
        position = EXCEL_QUEUE.qsize()

    return position
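The `get_queue_position` contract above (0 means executing, 1..N means waiting, -1 means unknown) can be demonstrated without the worker pool; the job ids and the pre-populated active set below are illustrative:

```python
from queue import Queue

EXCEL_QUEUE = Queue()          # waiting jobs
ACTIVE_JOBS = {"running-job"}  # jobs currently executing

for jid in ["a", "b", "c"]:
    EXCEL_QUEUE.put({"job_id": jid})

def get_queue_position(job_id):
    # Same contract as above: 0 = executing, 1..N = waiting position, -1 = not found
    if job_id in ACTIVE_JOBS:
        return 0
    for i, item in enumerate(list(EXCEL_QUEUE.queue)):
        if item["job_id"] == job_id:
            return i + 1
    return -1

positions = [get_queue_position(j) for j in ("running-job", "b", "zzz")]
print(positions)  # [0, 2, -1]
```

Reading `Queue.queue` directly is a snapshot of the underlying deque; the real module takes `EXCEL_LOCK` around it so the snapshot is consistent with `ACTIVE_JOBS`.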
110
files/modules/excel/routes.py
Normal file
@@ -0,0 +1,110 @@
from flask import Blueprint, request, jsonify, send_file, render_template
from pathlib import Path
import uuid
import json
from config_loader import load_config
from modules.core.audit import audit_log
from modules.core.security import get_current_user

from modules.core.security import requires_app_auth
from .service import start_excel_job
from .store import EXCEL_JOBS, EXCEL_LOCK

excel_bp = Blueprint("excel", __name__)
config = load_config()
API_BASE_URL = f"{config.app_base}:{config.service_port}"

UPLOAD_FOLDER = Path("./uploads")
UPLOAD_FOLDER.mkdir(exist_ok=True)

ALLOWED_EXTENSIONS = {"xlsx"}
API_URL = API_BASE_URL + "/chat"


def allowed_file(filename):
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS


# =========================
# Upload + start processing
# =========================
@excel_bp.route("/upload/excel", methods=["POST"])
@requires_app_auth
def upload_excel():
    file = request.files.get("file")
    email = request.form.get("email")

    if not file or not email:
        return jsonify({"error": "file and email required"}), 400

    if not allowed_file(file.filename):
        return jsonify({"error": "invalid file type"}), 400

    job_id = str(uuid.uuid4())
    audit_log("UPLOAD_EXCEL", f"job_id={job_id}")

    job_dir = UPLOAD_FOLDER / job_id
    job_dir.mkdir(parents=True, exist_ok=True)

    input_path = job_dir / "input.xlsx"
    file.save(input_path)

    with EXCEL_LOCK:
        EXCEL_JOBS[job_id] = {"status": "RUNNING"}

    user = get_current_user()

    start_excel_job(
        job_id=job_id,
        input_path=input_path,
        email=email,
        auth_user=None,
        auth_pass=None,
        api_url=API_URL
    )

    return jsonify({"status": "STARTED", "job_id": job_id})


# =========================
# Status
# =========================
@excel_bp.route("/job/<job_id>/status")
@requires_app_auth
def job_status(job_id):
    status_file = UPLOAD_FOLDER / job_id / "status.json"

    if not status_file.exists():
        return jsonify({"status": "PROCESSING"})

    return jsonify(json.loads(status_file.read_text()))


# =========================
# Download result
# =========================
@excel_bp.route("/download/<job_id>")
@requires_app_auth
def download(job_id):
    result_file = UPLOAD_FOLDER / job_id / "result.xlsx"

    if not result_file.exists():
        return jsonify({"error": "not ready"}), 404

    return send_file(
        result_file,
        as_attachment=True,
        download_name=f"RFP_result_{job_id}.xlsx"
    )

@excel_bp.route("/job/<job_id>/logs", methods=["GET"])
@requires_app_auth
def excel_logs(job_id):
    with EXCEL_LOCK:
        job = EXCEL_JOBS.get(job_id, {})
    return jsonify({"logs": job.get("logs", [])})

@excel_bp.route("/excel/job/<job_id>")
@requires_app_auth
def job_page(job_id):
    return render_template("excel/job_status.html", job_id=job_id)
115
files/modules/excel/service.py
Normal file
@@ -0,0 +1,115 @@
import threading
import json
from pathlib import Path
from datetime import datetime
from flask import jsonify
from .storage import upload_file, generate_download_url

from rfp_process import process_excel_rfp
from .store import EXCEL_JOBS, EXCEL_LOCK
from modules.users.email_service import send_completion_email
from modules.excel.queue_manager import enqueue_excel_job

EXECUTION_METHOD = "QUEUE"  # THREAD or QUEUE

UPLOAD_FOLDER = Path("uploads")
UPLOAD_FOLDER.mkdir(exist_ok=True)


def make_job_logger(job_id: str):
    """Simple logger: keeps logs in memory (same pattern as the architecture module)."""
    def _log(msg):
        with EXCEL_LOCK:
            job = EXCEL_JOBS.get(job_id)
            if job is not None:
                job.setdefault("logs", []).append(str(msg))
    return _log


def start_excel_job(job_id: str, input_path: Path, email: str, auth_user: str, auth_pass: str, api_url: str):
    job_dir = UPLOAD_FOLDER / job_id
    job_dir.mkdir(parents=True, exist_ok=True)

    output_path = job_dir / "result.xlsx"
    status_file = job_dir / "status.json"
    object_name = f"{job_id}/result.xlsx"

    logger = make_job_logger(job_id)

    def write_status(state: str, detail: str | None = None):
        payload = {
            "status": state,
            "updated_at": datetime.utcnow().isoformat(),
        }
        if detail:
            payload["detail"] = detail

        status_file.write_text(
            json.dumps(payload, ensure_ascii=False, indent=2),
            encoding="utf-8"
        )

        with EXCEL_LOCK:
            job = EXCEL_JOBS.get(job_id)
            if job is not None:
                job["status"] = state
                if detail:
                    job["detail"] = detail

    # make sure the job structure exists in memory
    with EXCEL_LOCK:
        EXCEL_JOBS.setdefault(job_id, {})
        EXCEL_JOBS[job_id].setdefault("logs", [])
        EXCEL_JOBS[job_id]["status"] = "PROCESSING"

    write_status("PROCESSING")
    logger(f"Starting Excel job {job_id}")
    logger(f"Input: {input_path}")
    logger(f"Output: {output_path}")

    def background():
        download_url = None  # defined up front so the error path can reference it safely
        try:
            # main processing
            process_excel_rfp(
                input_excel=input_path,
                output_excel=output_path,
                api_url=api_url,
                auth_user=auth_user,
                auth_pass=auth_pass,
            )

            write_status("DONE")
            logger("Excel processing DONE")

            upload_file(output_path, object_name)
            download_url = generate_download_url(object_name)

            write_status("DONE", download_url)

            # email / dev message
            dev_message = send_completion_email(email, download_url, job_id)
            if dev_message:
                logger(f"DEV email message/link: {dev_message}")

        except Exception as e:
            logger(f"ERROR: {e}")
            write_status("ERROR", str(e))

            try:
                dev_message = send_completion_email(
                    email=email,
                    download_url=download_url,
                    job_id=job_id,
                    status="ERROR",
                    detail=str(e)
                )
                if dev_message:
                    logger(f"DEV email error message/link: {dev_message}")
            except Exception as mail_err:
                logger(f"EMAIL ERROR: {mail_err}")

    if EXECUTION_METHOD == "THREAD":
        threading.Thread(target=background, daemon=True).start()
    else:
        enqueue_excel_job(job_id, background)
67
files/modules/excel/storage.py
Normal file
@@ -0,0 +1,67 @@
import oci
from datetime import datetime, timedelta
from config_loader import load_config
from oci.object_storage.models import CreatePreauthenticatedRequestDetails

config = load_config()


oci_config = oci.config.from_file(
    file_location="~/.oci/config",
    profile_name=config.bucket_profile
)

object_storage = oci.object_storage.ObjectStorageClient(oci_config)


def _namespace():
    if config.oci_namespace != "auto":
        return config.oci_namespace

    return object_storage.get_namespace().data


# =========================
# Upload file
# =========================
def upload_file(local_path: str, object_name: str):
    with open(local_path, "rb") as f:
        object_storage.put_object(
            namespace_name=_namespace(),
            bucket_name=config.oci_bucket,
            object_name=object_name,
            put_object_body=f
        )
    print(f"SUCCESS on Upload {object_name}")

# =========================
# Pre-authenticated download URL
# =========================
def generate_download_url(object_name: str, hours=24):
    expire = datetime.utcnow() + timedelta(hours=hours)

    details = CreatePreauthenticatedRequestDetails(
        name=f"job-{object_name}",
        access_type="ObjectRead",
        object_name=object_name,
        time_expires=expire
    )

    response = object_storage.create_preauthenticated_request(
        namespace_name=_namespace(),
        bucket_name=config.oci_bucket,
        create_preauthenticated_request_details=details
    )

    par = response.data

    download_link = (
        f"https://objectstorage.{oci_config['region']}.oraclecloud.com{par.access_uri}"
    )

    print("PAR CREATED OK")
    print(download_link)

    return download_link
4
files/modules/excel/store.py
Normal file
@@ -0,0 +1,4 @@
from threading import Lock

EXCEL_JOBS = {}
EXCEL_LOCK = Lock()
8
files/modules/health/routes.py
Normal file
@@ -0,0 +1,8 @@
from flask import Blueprint, jsonify

health_bp = Blueprint("health", __name__)


@health_bp.route("/health")
def health():
    return jsonify({"status": "UP"})
16
files/modules/home/routes.py
Normal file
@@ -0,0 +1,16 @@
from flask import Blueprint, render_template
from modules.core.security import requires_app_auth
from config_loader import load_config

home_bp = Blueprint("home", __name__)
config = load_config()
API_BASE_URL = f"{config.app_base}:{config.service_port}"

@home_bp.route("/")
@requires_app_auth
def index():
    return render_template(
        "index.html",
        api_base_url=API_BASE_URL,
        config=config
    )
29
files/modules/rest/routes.py
Normal file
@@ -0,0 +1,29 @@
import json
from flask import Blueprint, request, jsonify
from modules.rest.security import rest_auth_required
from modules.chat.service import answer_question  # reuse the chat answering logic

rest_bp = Blueprint("rest", __name__, url_prefix="/rest")


@rest_bp.route("/chat", methods=["POST"])
@rest_auth_required
def rest_chat():
    data = request.get_json(force=True) or {}

    question = (data.get("question") or "").strip()
    if not question:
        return jsonify({"error": "question required"}), 400

    raw_result = answer_question(question)

    try:
        parsed = json.loads(raw_result)
    except Exception:
        return jsonify({
            "error": "invalid LLM response",
            "raw": raw_result
        }), 500

    return jsonify(parsed)  # jsonify sets the application/json content type, unlike a plain json.dumps string
30
files/modules/rest/security.py
Normal file
@@ -0,0 +1,30 @@
import base64
from functools import wraps

from flask import request, jsonify
from modules.users.service import authenticate_user


def rest_auth_required(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization")

        if not auth or not auth.startswith("Basic "):
            return jsonify({"error": "authorization required"}), 401

        try:
            decoded = base64.b64decode(auth.split(" ")[1]).decode()
            username, password = decoded.split(":", 1)
        except Exception:
            return jsonify({"error": "invalid authorization header"}), 401

        user = authenticate_user(username, password)
        if not user:
            return jsonify({"error": "invalid credentials"}), 401

        # optional: make the authenticated user available downstream
        request.rest_user = user

        return f(*args, **kwargs)

    return wrapper
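The decorator above parses a standard HTTP Basic header. A small self-contained sketch of both sides of that exchange, mirroring the decoding logic in `rest_auth_required` (the helper names are illustrative, not part of the module). Note that `split(":", 1)` keeps any colons inside the password intact:

```python
import base64

def build_basic_header(username: str, password: str) -> str:
    # Encode "user:pass" exactly as the decorator expects to decode it
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def parse_basic_header(header: str) -> tuple[str, str]:
    # Mirrors the decoding in rest_auth_required: strip "Basic ", base64-decode,
    # then split on the FIRST colon only
    decoded = base64.b64decode(header.split(" ")[1]).decode()
    username, password = decoded.split(":", 1)
    return username, password

header = build_basic_header("alice", "s3cret:with:colons")
assert parse_basic_header(header) == ("alice", "s3cret:with:colons")
```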
4
files/modules/users/__init__.py
Normal file
@@ -0,0 +1,4 @@
from .routes import users_bp
from .model import db

__all__ = ["users_bp", "db"]
50
files/modules/users/db.py
Normal file
@@ -0,0 +1,50 @@
import hashlib
import os

import oracledb

from config_loader import load_config


def chunk_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


config = load_config()

# =========================
# Oracle Autonomous Configuration
# =========================
WALLET_PATH = config.wallet_path
DB_ALIAS = config.db_alias
USERNAME = config.username
PASSWORD = config.password
os.environ["TNS_ADMIN"] = WALLET_PATH

_pool = None


def get_pool():
    global _pool

    if _pool:
        return _pool

    _pool = oracledb.create_pool(
        user=USERNAME,
        password=PASSWORD,
        dsn=DB_ALIAS,
        config_dir=WALLET_PATH,
        wallet_location=WALLET_PATH,
        wallet_password=PASSWORD,
        min=2,
        max=8,
        increment=1
    )

    return _pool
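`chunk_hash` is a plain SHA-256 over the UTF-8 text, so the same chunk always maps to the same 64-character hex key, which is what makes it usable for deduplication (`CHUNK_HASH` in the evidence table). A quick illustration, using a well-known SHA-256 test vector:

```python
import hashlib

def chunk_hash(text: str) -> str:
    # Same function as in files/modules/users/db.py
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Deterministic: identical chunk text yields an identical key
assert chunk_hash("abc") == chunk_hash("abc")
# Always 64 hex characters (256 bits)
assert len(chunk_hash("any chunk of documentation")) == 64
# Standard SHA-256 test vector for "abc"
assert chunk_hash("abc") == "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
```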
72
files/modules/users/email_service.py
Normal file
@@ -0,0 +1,72 @@
import os
import smtplib
from email.message import EmailMessage

from config_loader import load_config

config = load_config()
API_BASE_URL = f"{config.app_base}:{config.service_port}"


def send_user_created_email(email, link, name=""):
    """
    DEV  -> return link only
    PROD -> send real email
    """

    if config.dev_mode == 1:
        return link  # DEV mode: just return the link, no email is sent

    host = os.getenv("RFP_SMTP_HOST", "localhost")
    port = int(os.getenv("RFP_SMTP_PORT", 25))

    msg = EmailMessage()
    msg["Subject"] = "Your account has been created"
    msg["From"] = "noreply@rfp.local"
    msg["To"] = email

    msg.set_content(f"""
Hello {name or email},

Your account was created.

Set your password here:
{link}
""")

    with smtplib.SMTP(host, port) as s:
        s.send_message(msg)

    return link


def send_completion_email(email, download_url, job_id):
    """
    DEV  -> return download link
    PROD -> send real email
    """

    if config.dev_mode == 1:
        return download_url  # DEV mode: just return the link, no email is sent

    host = os.getenv("RFP_SMTP_HOST", "localhost")
    port = int(os.getenv("RFP_SMTP_PORT", 25))

    msg = EmailMessage()
    msg["Subject"] = "Your RFP processing is complete"
    msg["From"] = "noreply@rfp.local"
    msg["To"] = email

    msg.set_content(f"""
Hello,

Your RFP Excel processing has finished successfully.

Download your file here:
{download_url}

Job ID: {job_id}
""")

    with smtplib.SMTP(host, port) as s:
        s.send_message(msg)

    return None
27
files/modules/users/model.py
Normal file
@@ -0,0 +1,27 @@
from datetime import datetime

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()


class User(db.Model):
    __tablename__ = "users"

    id = db.Column(db.Integer, primary_key=True)

    name = db.Column(db.String(120), nullable=False)
    email = db.Column(db.String(160), unique=True, nullable=False, index=True)

    role = db.Column(db.String(50), default="app")  # app | admin
    active = db.Column(db.Boolean, default=True)

    password_hash = db.Column(db.String(255))
    must_change_password = db.Column(db.Boolean, default=True)

    reset_token = db.Column(db.String(255))
    reset_expire = db.Column(db.DateTime)

    created_at = db.Column(db.DateTime, default=datetime.utcnow)

    def __repr__(self):
        return f"<User {self.email}>"
157
files/modules/users/routes.py
Normal file
@@ -0,0 +1,157 @@
from flask import Blueprint, render_template, request, redirect, url_for, flash
from modules.core.security import requires_admin_auth

from .service import (
    signup_user,
    list_users as svc_list_users,
    create_user,
    update_user,
    delete_user as svc_delete_user,
    get_user_by_token,
    set_password_service
)

from .token_service import generate_token, expiration, is_expired
from .email_service import send_user_created_email
from config_loader import load_config

users_bp = Blueprint(
    "users",
    __name__,
    template_folder="../../templates/users"
)

config = load_config()


# =========================
# LIST USERS (Oracle)
# =========================
@users_bp.route("/")
@requires_admin_auth
def list_users():
    users = svc_list_users()
    return render_template("list.html", users=users)


# =========================
# PUBLIC SIGNUP (Oracle)
# =========================
@users_bp.route("/signup", methods=["GET", "POST"])
def signup():

    if request.method == "POST":
        email = request.form.get("email", "").strip()
        name = request.form.get("name", "").strip()

        try:
            link = signup_user(email=email, name=name)
        except Exception as e:
            flash(str(e), "danger")
            return render_template("users/signup.html")

        if link and config.dev_mode == 1:
            flash(f"DEV MODE: password link → {link}", "success")
        else:
            flash("User created and email sent", "success")

        return redirect(url_for("users.signup"))

    return render_template("users/signup.html")


# =========================
# CREATE USER (Oracle)
# =========================
@users_bp.route("/new", methods=["GET", "POST"])
@requires_admin_auth
def new_user():

    if request.method == "POST":

        token = generate_token()

        create_user(
            name=request.form["name"],
            email=request.form["email"],
            role=request.form["role"],
            active="active" in request.form,
            token=token
        )

        link = url_for("users.set_password", token=token, _external=True)

        dev_link = send_user_created_email(
            request.form["email"],
            link,
            request.form["name"]
        )

        if dev_link and config.dev_mode == 1:
            flash(f"DEV MODE: password link → {dev_link}", "success")
        else:
            flash("User created and email sent", "success")

        return redirect(url_for("users.list_users"))

    return render_template("form.html", user=None)


# =========================
# EDIT USER (Oracle)
# =========================
@users_bp.route("/edit/<int:user_id>", methods=["GET", "POST"])
@requires_admin_auth
def edit_user(user_id):

    if request.method == "POST":
        update_user(
            user_id=user_id,
            name=request.form["name"],
            email=request.form["email"],
            role=request.form["role"],
            active="active" in request.form
        )

        return redirect(url_for("users.list_users"))

    # fetch the whole list and filter in memory (simple, works well at this scale)
    users = svc_list_users()
    user = next((u for u in users if u["id"] == user_id), None)

    return render_template("form.html", user=user)


# =========================
# DELETE USER (Oracle)
# =========================
@users_bp.route("/delete/<int:user_id>")
@requires_admin_auth
def delete_user(user_id):

    svc_delete_user(user_id)
    return redirect(url_for("users.list_users"))


# =========================
# SET PASSWORD (Oracle)
# =========================
@users_bp.route("/set-password/<token>", methods=["GET", "POST"])
def set_password(token):

    user = get_user_by_token(token)

    if not user or is_expired(user["expire"]):
        return render_template("set_password.html", expired=True)

    if request.method == "POST":

        pwd = request.form["password"]
        pwd2 = request.form["password2"]

        if pwd != pwd2:
            flash("Passwords do not match")
            return render_template("set_password.html", expired=False)

        set_password_service(user["id"], pwd)

        flash("Password updated successfully")
        return redirect("/")

    return render_template("set_password.html", expired=False)
204
files/modules/users/service.py
Normal file
@@ -0,0 +1,204 @@
from .token_service import generate_token, expiration
from .email_service import send_user_created_email
from config_loader import load_config
from .db import get_pool

from werkzeug.security import generate_password_hash, check_password_hash

config = load_config()


def authenticate_user(username: str, password: str):

    print("LOGIN TRY:", username)  # never log the password

    sql = """
        SELECT password_hash
        FROM app_users
        WHERE email = :1
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [username])
            row = cur.fetchone()

    if not row:
        return False

    stored_hash = row[0]

    return check_password_hash(stored_hash, password)


def create_user_with_password(username: str, password: str):
    # Direct-credential variant (renamed: it was previously also called
    # create_user and silently shadowed by the admin version below).
    # Uses the same werkzeug hash format that authenticate_user verifies.

    hashed = generate_password_hash(password)

    sql = """
        INSERT INTO app_users (username, password_hash)
        VALUES (:1, :2)
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [username, hashed])
            conn.commit()


def _default_name(email: str) -> str:
    return (email or "").split("@")[0]


def signup_user(email: str, name: str = ""):

    if not email:
        raise ValueError("Email required")

    email = email.lower().strip()
    name = name or email.split("@")[0]

    token = generate_token()

    pool = get_pool()

    sql_check = """
        SELECT id
        FROM app_users
        WHERE email = :1
    """

    sql_insert = """
        INSERT INTO app_users
        (name,email,user_role,active,reset_token,reset_expire,must_change_password)
        VALUES (:1,:2,'user',1,:3,:4,1)
    """

    sql_update = """
        UPDATE app_users
        SET reset_token=:1,
            reset_expire=:2,
            must_change_password=1
        WHERE email=:3
    """

    with pool.acquire() as conn:
        with conn.cursor() as cur:

            cur.execute(sql_check, [email])
            row = cur.fetchone()

            if not row:
                cur.execute(sql_insert, [name, email, token, expiration()])
            else:
                cur.execute(sql_update, [token, expiration(), email])

            conn.commit()

    link = f"{config.app_base}:{config.service_port}/admin/users/set-password/{token}"

    dev_link = send_user_created_email(email, link, name)

    return dev_link or link


def list_users():
    sql = """
        SELECT id, name, email, user_role, active
        FROM app_users
        ORDER BY name
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            cols = [c[0].lower() for c in cur.description]
            return [dict(zip(cols, r)) for r in cur.fetchall()]


def create_user(name, email, role, active, token):
    sql = """
        INSERT INTO app_users
        (name,email,user_role,active,reset_token,reset_expire,must_change_password)
        VALUES (:1,:2,:3,:4,:5,SYSTIMESTAMP + INTERVAL '1' DAY,1)
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [name, email, role, active, token])
            conn.commit()


def update_user(user_id, name, email, role, active):
    sql = """
        UPDATE app_users
        SET name=:1, email=:2, user_role=:3, active=:4
        WHERE id=:5
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [name, email, role, active, user_id])
            conn.commit()


def delete_user(user_id):
    sql = "DELETE FROM app_users WHERE id=:1"

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [user_id])
            conn.commit()


def get_user_by_token(token):
    sql = """
        SELECT id, reset_expire
        FROM app_users
        WHERE reset_token=:1
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [token])
            row = cur.fetchone()

    if not row:
        return None

    return {"id": row[0], "expire": row[1]}


def set_password_service(user_id, pwd):
    hashed = generate_password_hash(pwd)

    sql = """
        UPDATE app_users
        SET password_hash=:1,
            must_change_password=0,
            reset_token=NULL,
            reset_expire=NULL
        WHERE id=:2
    """

    pool = get_pool()

    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, [hashed, user_id])
            conn.commit()
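`authenticate_user` and `set_password_service` rely on the salted-hash scheme being symmetric: the stored string carries everything (method, salt, digest) needed to verify a candidate password later. A standard-library sketch of that same pattern using PBKDF2; the helper names and the `salt$digest` storage format here are illustrative, not werkzeug's actual format:

```python
import hashlib
import os

def hash_password(password: str, salt=None) -> str:
    # Embed the salt in the stored value so verification needs only
    # the stored string plus the candidate password (werkzeug does the same)
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return f"{salt.hex()}${digest.hex()}"

def verify_password(stored: str, password: str) -> bool:
    salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), 100_000
    )
    return candidate.hex() == digest_hex

stored = hash_password("s3cret")
assert verify_password(stored, "s3cret")
assert not verify_password(stored, "wrong")
# A fresh random salt means the same password hashes differently each time
assert stored != hash_password("s3cret")
```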
14
files/modules/users/token_service.py
Normal file
@@ -0,0 +1,14 @@
import secrets
from datetime import datetime, timedelta


def generate_token():
    return secrets.token_urlsafe(48)


def expiration(hours=24):
    return datetime.utcnow() + timedelta(hours=hours)


def is_expired(expire_dt):
    return not expire_dt or expire_dt < datetime.utcnow()
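The token lifecycle above is small enough to exercise end to end: issue a token, attach an expiry, and check expiry handling (including the edge case where the expiry is missing). The functions are copied verbatim from the module:

```python
import secrets
from datetime import datetime, timedelta

def generate_token():
    return secrets.token_urlsafe(48)

def expiration(hours=24):
    return datetime.utcnow() + timedelta(hours=hours)

def is_expired(expire_dt):
    # A missing expiry is treated as expired, which fails closed
    return not expire_dt or expire_dt < datetime.utcnow()

token = generate_token()
# token_urlsafe(48) draws 48 random bytes -> a 64-character URL-safe string
assert len(token) >= 48
# A freshly issued expiry lies in the future; a past one reads as expired
assert not is_expired(expiration(hours=24))
assert is_expired(datetime.utcnow() - timedelta(minutes=1))
assert is_expired(None)
```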
3095
files/oci_genai_llm_graphrag_rerank_rfp.py
Normal file
File diff suppressed because it is too large
112
files/pgql_oracle23ai.sql
Normal file
@@ -0,0 +1,112 @@
-- Entities table
CREATE TABLE entities (
    id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name VARCHAR2(255) UNIQUE NOT NULL
);

-- Relations table
CREATE TABLE relations (
    id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    from_entity_id NUMBER REFERENCES entities(id),
    to_entity_id NUMBER REFERENCES entities(id),
    relation_type VARCHAR2(255),
    source_text VARCHAR2(1000)
);

BEGIN
    ordsadmin.graph_view_admin.create_graph_view(
        graph_view_name         => 'my_graph',
        vertex_table_names      => 'ENTITIES',
        edge_table_names        => 'RELATIONS',
        vertex_id_column        => 'ID',
        edge_source_column      => 'FROM_ENTITY_ID',
        edge_destination_column => 'TO_ENTITY_ID'
    );
END;
/


CREATE PROPERTY GRAPH my_graph
    VERTEX TABLES (ENTITIES
        KEY (ID)
        LABEL ENTITIES
        PROPERTIES (NAME))
    EDGE TABLES (RELATIONS
        KEY (ID)
        SOURCE KEY (FROM_ENTITY_ID) REFERENCES ENTITIES(ID)
        DESTINATION KEY (TO_ENTITY_ID) REFERENCES ENTITIES(ID)
        LABEL RELATIONS
        PROPERTIES (RELATION_TYPE, SOURCE_TEXT))
    OPTIONS (PG_PGQL);

-- Drop the old indexes, if necessary
DROP INDEX ent_name_text_idx;
DROP INDEX rel_type_text_idx;

-- Recreate with the correct index type
CREATE INDEX ent_name_text_idx ON ENTITIES(NAME) INDEXTYPE IS CTXSYS.CONTEXT;
CREATE INDEX rel_type_text_idx ON RELATIONS(RELATION_TYPE) INDEXTYPE IS CTXSYS.CONTEXT;

EXEC CTX_DDL.SYNC_INDEX('ENT_NAME_TEXT_IDX');
EXEC CTX_DDL.SYNC_INDEX('REL_TYPE_TEXT_IDX');

SELECT from_entity,
       relation_type,
       to_entity
FROM GRAPH_TABLE(
    my_graph
    MATCH (e1 IS ENTITIES)-[r IS RELATIONS]->(e2 IS ENTITIES)
    WHERE CONTAINS(LOWER(e1.name), 'gateway') > 0
       OR CONTAINS(LOWER(e2.name), 'gateway') > 0
       OR CONTAINS(LOWER(r.RELATION_TYPE), 'gateway') > 0
    COLUMNS (
        e1.name AS from_entity, r.RELATION_TYPE AS relation_type, e2.name AS to_entity
    )
)
FETCH FIRST 20 ROWS ONLY;

---------------
-- # 2026-01-29 - VECTOR 23ai

CREATE TABLE rag_docs (
    id NUMBER GENERATED BY DEFAULT AS IDENTITY,
    content CLOB,
    source VARCHAR2(1000),
    chunk_hash VARCHAR2(64),
    status VARCHAR2(20),
    embed VECTOR(1024)
);

CREATE VECTOR INDEX rag_docs_idx
    ON rag_docs(embed)
    ORGANIZATION HNSW
    DISTANCE COSINE;

-------------------
-- #2026-02-07 - app_users

DROP TABLE app_users;

CREATE TABLE app_users (
    id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,

    username VARCHAR2(100) UNIQUE,
    name VARCHAR2(200),
    email VARCHAR2(200) UNIQUE,

    user_role VARCHAR2(50),

    password_hash VARCHAR2(300),

    active NUMBER(1) DEFAULT 1,

    reset_token VARCHAR2(300),
    reset_expire TIMESTAMP,

    must_change_password NUMBER(1) DEFAULT 0,

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_users_email ON app_users(email);
CREATE INDEX idx_users_token ON app_users(reset_token);
286
files/process_excel_rfp.py
Normal file
@@ -0,0 +1,286 @@
import pandas as pd
import requests
import json
from pathlib import Path
import os
import re

# =========================
# Configuration
# =========================
EXCEL_PATH = "<YOUR_EXCEL_XLSX_FILE>"
API_URL = "http://demo-orcl-api-ai.hoshikawa.com.br:8101/rest/chat"
QUERY_LOG_FILE = Path("queries_with_low_confidence_or_no.txt")
TIMEOUT = 120
APP_USER = os.environ.get("APP_USER", "<YOUR_USER_NAME>")
APP_PASS = os.environ.get("APP_PASS", "<YOUR_PASSWORD>")

CONTEXT_COLUMNS = [1, 2]  # USED IF YOU HAVE A NON-HIERARCHICAL STRUCTURE
ORDER_COLUMN = 0          # INDEX OF YOUR ORDER/NUMBERING COLUMN
QUESTION_COLUMN = 4       # INDEX OF THE QUESTION/TEXT COLUMN submitted to the RFP AI
ALLOWED_STRUCTURES = [
    "x.x",
    "x.x.x",
    "x.x.x.x",
    "x.x.x.x.x",
    "x.x.x.x.x.x"
]
ALLOWED_SEPARATORS = [".", "-", "/", "_", ">"]

ANSWER_COL = "ANSWER"     # NAME OF THE COLUMN for the YES/NO/PARTIAL result
JSON_COL = "RESULT_JSON"  # NAME OF THE COLUMN for the RFP AI automation results

CONFIDENCE_COL = "CONFIDENCE"
AMBIGUITY_COL = "AMBIGUITY"
CONF_REASON_COL = "CONFIDENCE_REASON"
JUSTIFICATION_COL = "JUSTIFICATION"

# =========================
# Helpers
# =========================

def normalize_structure(num: str, separators: list[str]) -> str:
    if not num:
        return ""

    pattern = "[" + re.escape("".join(separators)) + "]"
    return re.sub(pattern, ".", num.strip())


def should_process(num: str, allowed_patterns: list[str], separators: list[str]) -> bool:
    normalized = normalize_structure(num, separators)

    if not is_hierarchical(normalized):
        return True

    depth = normalized.count(".") + 1

    allowed_depths = {
        pattern.count(".") + 1
        for pattern in allowed_patterns
    }

    return depth in allowed_depths


def register_failed_query(query: str, answer: str, confidence: str):
    QUERY_LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    print("Negative/Doubt result")
    with QUERY_LOG_FILE.open("a", encoding="utf-8") as f:
        f.write("----------------------------\n")
        f.write(f"Query:\n{query}\n\n")
        f.write(f"Answer: {answer}\n")
        f.write(f"Confidence: {confidence}\n\n")


def normalize_num(num: str) -> str:
    return num.strip().rstrip(".")


def build_question_from_columns(row, context_cols: list[int], question_col: int) -> str:
    context_parts = []

    for col in context_cols:
        value = str(row.iloc[col]).strip()
        if value:
            context_parts.append(value)

    question = str(row.iloc[question_col]).strip()

    if not context_parts:
        return question

    context = " > ".join(dict.fromkeys(context_parts))
    return f'Considering the context of "{context}", {question}'


def build_question(hierarchy: dict, current_num: str) -> str:
    if not is_hierarchical(current_num):
        return hierarchy[current_num]["text"]

    parts = current_num.split(".")

    main_subject = None
    main_key = None

    # highest existing ancestor
    for i in range(1, len(parts) + 1):
        key = ".".join(parts[:i])
        if key in hierarchy:
            main_subject = hierarchy[key]["text"]
            main_key = key
            break

    if not main_subject:
        raise ValueError(f"No valid root subject for {current_num}")

    subtopics = []
    for i in range(1, len(parts)):
        key = ".".join(parts[: i + 1])
        if key in hierarchy and key != main_key:
            subtopics.append(hierarchy[key]["text"])

    specific = hierarchy[current_num]["text"]

    if subtopics:
        context = " > ".join(subtopics)
        return (
            f'Considering the context of "{context}", '
            f'what is the {specific} of {main_subject}?'
        )

    return f'What is the {specific} of {main_subject}?'


def normalize_api_response(api_response: dict) -> dict:
    if isinstance(api_response, dict) and "result" in api_response and isinstance(api_response["result"], dict):
        if "answer" in api_response["result"]:
            return api_response["result"]
    return api_response


def call_api(question: str) -> dict:
    payload = {"question": question}

    response = requests.post(
        API_URL,
        json=payload,
        auth=(APP_USER, APP_PASS),  # 🔐 BASIC AUTH
        timeout=TIMEOUT
    )

    response.raise_for_status()
    return response.json()


def is_explicit_url(source: str) -> bool:
    return source.startswith("http://") or source.startswith("https://")


def is_hierarchical(num: str) -> bool:
    return bool(
        num
        and "." in num
        and all(p.isdigit() for p in num.split("."))
    )


def normalize_evidence_sources(evidence: list[dict]) -> list[dict]:
    normalized = []

    for ev in evidence:
        source = ev.get("source", "").strip()
        quote = ev.get("quote", "").strip()

        if is_explicit_url(source):
            normalized.append(ev)
            continue

        normalized.append({
            "quote": quote,
            "source": source or "Oracle Cloud Infrastructure documentation"
        })

    return normalized


# =========================
# Main
# =========================
def main():
    df = pd.read_excel(EXCEL_PATH, dtype=str).fillna("")

    if ANSWER_COL not in df.columns:
        df[ANSWER_COL] = ""

    if JSON_COL not in df.columns:
        df[JSON_COL] = ""

    for col in [
        ANSWER_COL,
        JSON_COL,
        CONFIDENCE_COL,
        AMBIGUITY_COL,
        CONF_REASON_COL,
        JUSTIFICATION_COL
    ]:
        if col not in df.columns:
            df[col] = ""

    hierarchy = {}
    for idx, row in df.iterrows():
        num = normalize_num(str(row.iloc[ORDER_COLUMN]))
        text = str(row.iloc[QUESTION_COLUMN]).strip()

        if num and text:
            hierarchy[num] = {
                "text": text,
                "row": idx
            }

    for num, info in hierarchy.items():
        if not should_process(num, ALLOWED_STRUCTURES, ALLOWED_SEPARATORS):
            print(f"⏭️ SKIPPED (structure not allowed): {num}")
            continue

        try:
            row = df.loc[info["row"]]
            num = normalize_num(str(row.iloc[ORDER_COLUMN]))

            if is_hierarchical(num):
                question = build_question(hierarchy, num)
            else:
                question = build_question_from_columns(
                    row,
                    CONTEXT_COLUMNS,
                    QUESTION_COLUMN
                )

            print(f"\n❓ QUESTION SENT TO API:\n{question}")

            api_response_raw = call_api(question)
            api_response = normalize_api_response(api_response_raw)

            if "evidence" in api_response:
                api_response["evidence"] = normalize_evidence_sources(
                    api_response.get("evidence", [])
                )

            if (
                api_response.get("answer") == "NO"
                or api_response.get("confidence") in ("MEDIUM", "LOW")
            ):
                register_failed_query(
                    query=question,
                    answer=api_response.get("answer", ""),
                    confidence=api_response.get("confidence", "")
                )

            print("📄 JSON RESPONSE (normalized):")
            print(json.dumps(api_response, ensure_ascii=False, indent=2))
            print("-" * 80)

            df.at[info["row"], ANSWER_COL] = api_response.get("answer", "ERROR")
            df.at[info["row"], CONFIDENCE_COL] = api_response.get("confidence", "")
            df.at[info["row"], AMBIGUITY_COL] = str(api_response.get("ambiguity_detected", ""))
            df.at[info["row"], CONF_REASON_COL] = api_response.get("confidence_reason", "")
            df.at[info["row"], JUSTIFICATION_COL] = api_response.get("justification", "")

            df.at[info["row"], JSON_COL] = json.dumps(api_response, ensure_ascii=False)

        except Exception as e:
            error_json = {
                "answer": "ERROR",
                "confidence": "LOW",
                "ambiguity_detected": True,
                "confidence_reason": "Processing error",
                "justification": str(e),
                "evidence": []
            }

            df.at[info["row"], ANSWER_COL] = "ERROR"
            df.at[info["row"], CONFIDENCE_COL] = "LOW"
            df.at[info["row"], AMBIGUITY_COL] = "True"
            df.at[info["row"], CONF_REASON_COL] = "Processing error"
            df.at[info["row"], JUSTIFICATION_COL] = str(e)
            df.at[info["row"], JSON_COL] = json.dumps(error_json, ensure_ascii=False)

            print(f"❌ ERROR processing item {num}: {e}")

    output_path = Path(EXCEL_PATH).with_name(
        Path(EXCEL_PATH).stem + "_result.xlsx"
    )
    df.to_excel(output_path, index=False)

    print(f"\n✅ Saved in: {output_path}")


if __name__ == "__main__":
    main()
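The structure filter at the heart of this script (`normalize_structure` + `should_process`) folds every configured separator into "." and then gates rows by hierarchy depth. The helpers below are copied from the script, with `ALLOWED_STRUCTURES` trimmed to two depths purely for demonstration:

```python
import re

ALLOWED_STRUCTURES = ["x.x", "x.x.x"]  # depths 2 and 3 only, for this demo
ALLOWED_SEPARATORS = [".", "-", "/", "_", ">"]

def normalize_structure(num, separators):
    if not num:
        return ""
    pattern = "[" + re.escape("".join(separators)) + "]"
    return re.sub(pattern, ".", num.strip())

def is_hierarchical(num):
    return bool(num and "." in num and all(p.isdigit() for p in num.split(".")))

def should_process(num, allowed_patterns, separators):
    normalized = normalize_structure(num, separators)
    if not is_hierarchical(normalized):
        return True  # free-form identifiers are always processed
    depth = normalized.count(".") + 1
    allowed_depths = {p.count(".") + 1 for p in allowed_patterns}
    return depth in allowed_depths

# Any configured separator is folded to "." before the depth check
assert normalize_structure("1-2/3", ALLOWED_SEPARATORS) == "1.2.3"
assert should_process("1.2", ALLOWED_STRUCTURES, ALLOWED_SEPARATORS)          # depth 2: allowed
assert not should_process("1.2.3.4", ALLOWED_STRUCTURES, ALLOWED_SEPARATORS)  # depth 4: rejected
assert should_process("ABC", ALLOWED_STRUCTURES, ALLOWED_SEPARATORS)          # non-hierarchical
```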
428
files/rfp_process.py
Normal file
428
files/rfp_process.py
Normal file
@@ -0,0 +1,428 @@
|
||||
import pandas as pd
|
||||
import requests
|
||||
import json
|
||||
from pathlib import Path
|
||||
import os
|
||||
import re
|
||||
import logging
|
||||
from config_loader import load_config
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed
|
||||
import time
|
||||
from queue import Queue
|
||||
import threading
|
||||
from oci_genai_llm_graphrag_rerank_rfp import answer_question
|
||||
|
||||
config = load_config()
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s | %(levelname)s | %(name)s | %(message)s"
|
||||
)
|
||||
|
||||
EXCEL_QUEUE = Queue()
|
||||
|
||||
# =========================
|
||||
# Configurações
|
||||
# =========================
|
||||
API_URL = "http://127.0.0.1:" + str(config.service_port) + "/chat"
|
||||
QUERY_LOG_FILE = Path("queries_with_low_confidence_or_no.txt")
|
||||
|
||||
CONTEXT_COLUMNS = [1, 2] # USE IF YOU HAVE A NON-HIERARQUICAL STRUCTURE
|
||||
ORDER_COLUMN = 0 # WHERE ARE YOUR ORDER LINE COLUMN
|
||||
QUESTION_COLUMN = 4 # WHERE ARE YOUR QUESTION/TEXT to submit to RFP AI
|
||||
ALLOWED_STRUCTURES = [
|
||||
"x.x",
|
||||
"x.x.x",
|
||||
"x.x.x.x",
|
||||
"x.x.x.x.x",
|
||||
"x.x.x.x.x.x"
|
||||
]
|
||||
ALLOWED_SEPARATORS = [".", "-", "/", "_", ">"]
|
||||
|
||||
ANSWER_COL = "ANSWER" # NAME YOUR COLUMN for the YES/NO/PARTIAL result
|
||||
JSON_COL = "RESULT_JSON" # NAME YOUR COLUMN for the RFP AI automation results
|
||||
ARCH_PLAN_COL = "ARCH_PLAN"
|
||||
MERMAID_COL = "MERMAID"
|
||||
|
||||
CONFIDENCE_COL = "CONFIDENCE"
|
||||
AMBIGUITY_COL = "AMBIGUITY"
|
||||
CONF_REASON_COL = "CONFIDENCE_REASON"
|
||||
JUSTIFICATION_COL = "JUSTIFICATION"
|
||||
|
||||
# =========================
|
||||
# Helpers
|
||||
# =========================
|
||||
|
||||
def normalize_structure(num: str, separators: list[str]) -> str:
|
||||
if not num:
|
||||
return ""
|
||||
|
||||
pattern = "[" + re.escape("".join(separators)) + "]"
|
||||
return re.sub(pattern, ".", num.strip())
|
||||
|
||||
def should_process(num: str, allowed_patterns: list[str], separators: list[str]) -> bool:
|
||||
normalized = normalize_structure(num, separators)
|
||||
|
||||
if not is_hierarchical(normalized):
|
||||
return True
|
||||
|
||||
depth = normalized.count(".") + 1
|
||||
|
||||
allowed_depths = {
|
||||
pattern.count(".") + 1
|
||||
for pattern in allowed_patterns
|
||||
}
|
||||
|
||||
return depth in allowed_depths
|
||||
|
||||
def register_failed_query(query: str, answer: str, confidence: str):
    QUERY_LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    logger.info("Negative/Doubt result")
    with QUERY_LOG_FILE.open("a", encoding="utf-8") as f:
        f.write("----------------------------\n")
        f.write(f"Query:\n{query}\n\n")
        f.write(f"Answer: {answer}\n")
        f.write(f"Confidence: {confidence}\n\n")


def normalize_num(num: str) -> str:
    return num.strip().rstrip(".")

def build_question_from_columns(row, context_cols: list[int], question_col: int) -> str:
    context_parts = []

    for col in context_cols:
        value = str(row.iloc[col]).strip()
        if value:
            context_parts.append(value)

    question = str(row.iloc[question_col]).strip()

    if not context_parts:
        return question

    context = " > ".join(dict.fromkeys(context_parts))
    return f'Considering the context of "{context}", {question}'

def build_question(hierarchy: dict, current_num: str) -> str:
    if not is_hierarchical(current_num):
        return hierarchy[current_num]["text"]

    parts = current_num.split(".")

    main_subject = None
    main_key = None

    # highest existing ancestor
    for i in range(1, len(parts) + 1):
        key = ".".join(parts[:i])
        if key in hierarchy:
            main_subject = hierarchy[key]["text"]
            main_key = key
            break

    if not main_subject:
        raise ValueError(f"No valid root subject for {current_num}")

    subtopics = []
    for i in range(1, len(parts)):
        key = ".".join(parts[: i + 1])
        if key in hierarchy and key != main_key:
            subtopics.append(hierarchy[key]["text"])

    specific = hierarchy[current_num]["text"]

    if subtopics:
        context = " > ".join(subtopics)
        return (
            f'Considering the context of "{context}", '
            f'what is the {specific} of {main_subject}?'
        )

    return f'What is the {specific} of {main_subject}?'

def normalize_api_response(api_response) -> dict:
    # --------------------------------
    # 🔥 STRING → JSON
    # --------------------------------
    if isinstance(api_response, str):
        try:
            api_response = json.loads(api_response)
        except Exception:
            return {"error": f"Invalid string response: {api_response[:300]}"}

    if not isinstance(api_response, dict):
        return {"error": f"Invalid type: {type(api_response)}"}

    if "error" in api_response:
        return api_response

    if isinstance(api_response.get("result"), dict):
        return api_response["result"]

    if "answer" in api_response:
        return api_response

    return {"error": f"Unexpected format: {str(api_response)[:300]}"}

def call_api(
    question: str,
    *,
    api_url: str,
    timeout: int,
    auth_user: str | None,
    auth_pass: str | None,
) -> dict:

    payload = {"question": question}

    response = requests.post(
        api_url,
        json=payload,
        auth=(auth_user, auth_pass) if auth_user else None,
        timeout=timeout
    )

    if response.status_code >= 500:
        raise RuntimeError(
            f"Server error {response.status_code}: {response.text}"
        )

    text = response.text.lower()

    if "gateway time" in text or "timeout" in text:
        raise RuntimeError(response.text)

    try:
        return response.json()
    except ValueError:
        raise RuntimeError(
            f"Invalid JSON: {response.text[:300]}"
        )

def is_explicit_url(source: str) -> bool:
    return source.startswith("http://") or source.startswith("https://")


def is_hierarchical(num: str) -> bool:
    return bool(
        num
        and "." in num
        and all(p.isdigit() for p in num.split("."))
    )

def normalize_evidence_sources(evidence: list[dict]) -> list[dict]:
    normalized = []

    for ev in evidence:
        source = ev.get("source", "").strip()
        quote = ev.get("quote", "").strip()

        if is_explicit_url(source):
            normalized.append(ev)
            continue

        normalized.append({
            "quote": quote,
            "source": source or "Oracle Cloud Infrastructure documentation"
        })

    return normalized

def build_justification_with_links(justification: str, evidence: list[dict]) -> str:
    """
    Combine justification text + evidence URLs in a readable format for Excel.
    """
    if not evidence:
        return justification or ""

    urls = []

    for ev in evidence:
        src = ev.get("source", "").strip()
        if is_explicit_url(src):
            urls.append(src)

    if not urls:
        return justification or ""

    links_text = "\n".join(f"- {u}" for u in sorted(set(urls)))

    if justification:
        return f"{justification}\n\nSources:\n{links_text}"

    return f"Sources:\n{links_text}"

def call_api_with_retry(question, max_minutes=30, **kwargs):
    start = time.time()
    attempt = 0
    delay = 5

    while True:
        try:
            return call_api(question, **kwargs)

        except Exception as e:
            attempt += 1
            elapsed = time.time() - start

            msg = str(e).lower()
            if any(x in msg for x in ["401", "403", "400", "invalid json format"]):
                raise

            if elapsed > max_minutes * 60:
                raise RuntimeError(
                    f"Timeout after {attempt} attempts / {int(elapsed)}s"
                )

            logger.info(
                f"🔁 Retry {attempt} | waiting {delay}s | {e}"
            )

            time.sleep(delay)

            delay = min(delay * 1.5, 60)


def call_local_engine(question: str) -> dict:
    return answer_question(question)

# =========================
# Main
# =========================
def process_excel_rfp(
    input_excel: Path,
    output_excel: Path,
    *,
    api_url: str,
    timeout: int = 120,
    auth_user: str | None = None,
    auth_pass: str | None = None,
) -> Path:

    df = pd.read_excel(input_excel, dtype=str).fillna("")

    for col in [
        ANSWER_COL,
        JSON_COL,
        CONFIDENCE_COL,
        AMBIGUITY_COL,
        CONF_REASON_COL,
        JUSTIFICATION_COL
    ]:
        if col not in df.columns:
            df[col] = ""

    hierarchy = {}
    for idx, row in df.iterrows():
        num = normalize_num(str(row.iloc[ORDER_COLUMN]))
        text = str(row.iloc[QUESTION_COLUMN]).strip()

        if num and text:
            hierarchy[num] = {"text": text, "row": idx}

    # =========================================
    # 🔥 PARALLEL WORKER
    # =========================================
    def process_row(num, info):
        try:
            row = df.loc[info["row"]]

            if is_hierarchical(num):
                question = build_question(hierarchy, num)
            else:
                question = build_question_from_columns(
                    row,
                    CONTEXT_COLUMNS,
                    QUESTION_COLUMN
                )

            logger.info(f"\n🔸 QUESTION {num} SENT TO API:\n{question}")

            # raw = call_api_with_retry(
            #     question,
            #     api_url=api_url,
            #     timeout=timeout,
            #     auth_user=auth_user,
            #     auth_pass=auth_pass
            # )
            raw = call_local_engine(question)

            resp = normalize_api_response(raw)

            return info["row"], question, resp

        except Exception as e:
            return info["row"], "", {"error": str(e)}

    # =========================================
    # PARALLEL EXECUTION - FUTURE - OCI ACCEPTS ONLY 1 HERE
    # =========================================
    futures = []

    with ThreadPoolExecutor(max_workers=1) as executor:

        for num, info in hierarchy.items():

            if not should_process(num, ALLOWED_STRUCTURES, ALLOWED_SEPARATORS):
                continue

            futures.append(executor.submit(process_row, num, info))

        for f in as_completed(futures):

            row_idx, question, api_response = f.result()
            api_response = normalize_api_response(api_response)

            try:
                if "error" in api_response:
                    raise Exception(api_response["error"])

                if "evidence" in api_response:
                    api_response["evidence"] = normalize_evidence_sources(
                        api_response["evidence"]
                    )

                if (
                    api_response.get("answer") == "NO"
                    or api_response.get("confidence") in ("MEDIUM", "LOW")
                ):
                    register_failed_query(
                        query=question,
                        answer=api_response.get("answer", ""),
                        confidence=api_response.get("confidence", "")
                    )

                df.at[row_idx, ANSWER_COL] = api_response.get("answer", "ERROR")
                df.at[row_idx, CONFIDENCE_COL] = api_response.get("confidence", "")
                df.at[row_idx, AMBIGUITY_COL] = str(api_response.get("ambiguity_detected", ""))
                df.at[row_idx, CONF_REASON_COL] = api_response.get("confidence_reason", "")
                df.at[row_idx, JUSTIFICATION_COL] = build_justification_with_links(
                    api_response.get("justification", ""),
                    api_response.get("evidence", [])
                )
                df.at[row_idx, JSON_COL] = json.dumps(api_response, ensure_ascii=False)

                logger.info(json.dumps(api_response, indent=2))

            except Exception as e:
                df.at[row_idx, ANSWER_COL] = "ERROR"
                df.at[row_idx, CONFIDENCE_COL] = "LOW"
                df.at[row_idx, JUSTIFICATION_COL] = str(e)

                logger.info(f"❌ ERROR: {e}")

    df.to_excel(output_excel, index=False)

    return output_excel

if __name__ == "__main__":
    import sys

    input_path = Path(sys.argv[1])
    output_path = input_path.with_name(input_path.stem + "_result.xlsx")

    process_excel_rfp(
        input_excel=input_path,
        output_excel=output_path,
        api_url=API_URL,
    )
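The hierarchy filter above decides which spreadsheet rows are sent to the RFP AI: separators are normalized to dots, and only requirement numbers whose depth matches one of the `ALLOWED_STRUCTURES` patterns pass. This standalone sketch re-implements that logic outside the repository purely for illustration:

```python
import re

def normalize_structure(num: str, separators: list[str]) -> str:
    # collapse any allowed separator ("-", "/", "_", ">") into "."
    if not num:
        return ""
    pattern = "[" + re.escape("".join(separators)) + "]"
    return re.sub(pattern, ".", num.strip())

def is_hierarchical(num: str) -> bool:
    return bool(num and "." in num and all(p.isdigit() for p in num.split(".")))

def should_process(num: str, allowed_patterns: list[str], separators: list[str]) -> bool:
    normalized = normalize_structure(num, separators)
    if not is_hierarchical(normalized):
        return True  # free-text rows are always processed
    depth = normalized.count(".") + 1
    allowed_depths = {p.count(".") + 1 for p in allowed_patterns}
    return depth in allowed_depths

patterns = ["x.x", "x.x.x"]
seps = [".", "-", "/", "_", ">"]

print(should_process("1-2", patterns, seps))      # True: "-" normalizes to ".", depth 2
print(should_process("1.2.3.4", patterns, seps))  # False: depth 4 is not allowed
print(should_process("Intro", patterns, seps))    # True: non-hierarchical rows pass through
```

Note that a depth pattern like `"x.x"` only counts levels; the literal characters other than the dots are irrelevant.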
BIN
files/source_code.zip
Normal file
Binary file not shown.
67
files/templates/admin_menu.html
Normal file
@@ -0,0 +1,67 @@
{% extends "base.html" %}
{% block content %}

<h1>⚙️ Admin Panel</h1>

<!-- USERS -->
<div class="card">
  <h2>👤 Users</h2>

  <p class="small">
    Create, edit and manage system users and permissions.
  </p>

  <a href="{{ url_for('users.list_users') }}" class="btn">
    Open User Management
  </a>
</div>

<!-- KNOWLEDGE -->
<div class="card">
  <h2>🔐 Knowledge Governance</h2>

  <p class="small">
    Invalidate outdated knowledge or manually add validated information to the RAG base.
  </p>

  <a href="{{ url_for('admin.invalidate_page') }}" class="btn">
    Open Governance Tools
  </a>
</div>

<div class="card">
  <h2>♻️ Maintenance</h2>

  <p class="small">
    Reload all knowledge indexes, embeddings and caches without restarting the server.
  </p>

  <button class="btn" onclick="rebootSystem()">
    Reload Knowledge
  </button>

  <pre id="rebootResult" style="margin-top:10px;"></pre>
</div>

<script>
  async function rebootSystem() {
    const box = document.getElementById("rebootResult");
    box.textContent = "⏳ Reloading...";

    const res = await fetch("/admin/reboot", {
      method: "POST"
    });

    const data = await res.json();

    box.textContent = "✅ " + data.message;
  }
</script>
{% endblock %}
321
files/templates/base.html
Normal file
@@ -0,0 +1,321 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>ORACLE RFP AI Platform</title>

  <style>
    /* =========================
       GLOBAL THEME (Oracle light)
       ========================= */

    *,
    *::before,
    *::after {
      box-sizing: border-box;
    }

    body {
      margin: 0;
      font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Arial, sans-serif;

      /* previously: dark gradient */
      background: #f5f6f7;

      /* previously: light text */
      color: #1f2937;

      min-height: 100vh;
    }

    /* =========================
       LINKS
       ========================= */

    a {
      color: #C74634; /* Oracle red */
      text-decoration: none;
    }

    a:hover {
      opacity: 0.85;
    }

    /* =========================
       NAVBAR
       ========================= */

    .navbar {
      position: fixed; /* 🔥 pinned to the very top */
      top: 0;
      left: 0;

      width: 100%;

      background: #E30613;

      z-index: 9999;

      box-shadow: 0 2px 8px rgba(0,0,0,0.15);
    }

    .navbar-inner {
      width: 100%;

      /* 🔥 minimal padding, vertical only */
      padding: 14px 18px;

      display: flex;
      align-items: center;
      justify-content: space-between;
    }

    /* white text */
    .navbar a,
    .nav-left {
      color: white;
      font-weight: 500;
    }

    .nav-left {
      font-weight: bold;
      font-size: 18px;

      /* Oracle accent */
      color: #FFFFFF;
    }

    .nav-links {
      display: flex;
      gap: 18px;
      font-size: 14px;
    }

    /* =========================
       LAYOUT
       ========================= */

    .container {
      max-width: 1400px;
      margin: 90px auto 40px auto; /* 🔥 room for the fixed navbar */
      padding: 0 24px;
    }

    /* =========================
       CARD
       ========================= */

    .card {
      /* previously: #1e293b */
      background: #ffffff;

      border-radius: 14px;

      padding: 24px;

      /* previously: heavy shadow */
      box-shadow: 0 4px 12px rgba(0,0,0,0.06);

      border: 1px solid #e5e7eb;

      margin-bottom: 24px;
    }

    /* =========================
       TABLE
       ========================= */

    table {
      width: 100%;
      border-collapse: collapse;
    }

    th, td {
      padding: 10px 12px;
      border-bottom: 1px solid #e5e7eb;
      text-align: left;
    }

    th {
      color: #374151;
    }

    /* =========================
       INPUTS
       ========================= */

    input, select, textarea {
      /* previously: dark background */
      background: #ffffff;

      border: 1px solid #e5e7eb;

      color: #111;

      padding: 10px;
      border-radius: 8px;
      width: 100%;
    }

    textarea {
      resize: vertical;
    }

    /* =========================
       BUTTONS
       ========================= */

    .btn {
      display: inline-flex;
      align-items: center;
      justify-content: center;

      background: #F80000;
      color: white;

      padding: 10px 18px;
      border-radius: 8px;

      border: none;
      cursor: pointer;

      font-weight: 500;
      font-size: 14px;

      text-decoration: none;
      line-height: 1;

      gap: 6px;
    }

    .btn:hover {
      opacity: 0.9;
    }

    .btn-success {
      background: #16a34a;
    }

    .btn-danger {
      background: #dc2626;
    }

    /* =========================
       FLASH MESSAGES
       ========================= */

    .flash {
      padding: 10px;
      border-radius: 8px;
      margin-bottom: 14px;
    }

    .flash-success {
      background: #e7f7ed;
      color: #166534;
    }

    .flash-error {
      background: #fee2e2;
      color: #991b1b;
    }

    /* =========================
       MOBILE
       ========================= */

    @media (max-width: 768px) {
      .container {
        margin: 20px 12px;
      }

      table {
        font-size: 12px;
      }
    }

    .logo {
      display: flex;
      align-items: center;
      gap: 12px;
      font-weight: 600;
      font-size: 18px;
      color: white; /* white text here too */
    }

    .oracle-icon {
      width: 22px;
      height: 22px;
      flex-shrink: 0;
    }
  </style>
</head>

<body>

  <!-- =========================
       NAVBAR
       ========================= -->
  <div class="navbar">
    <div class="navbar-inner">

      <div class="nav-left logo">
        <img
          src="{{ url_for('static', filename='oracle.webp') }}"
          class="oracle-icon"
          alt="Oracle"
        />
        <span>ORACLE RFP AI Platform</span>
      </div>

      {% if request.endpoint != 'auth.login_page' %}
      <div class="nav-links">
        <a href="/">Chat</a>

        {% if current_user and current_user.role == "admin" %}
        <a href="/admin">Admin</a>
        {% endif %}

        {% if current_user %}
        <a href="/logout">Logout</a>
        {% else %}
        <a href="/login">Login</a>
        {% endif %}
      </div>
      {% endif %}

    </div>
  </div>

  <!-- =========================
       PAGE CONTENT
       ========================= -->
  <div class="container">

    {% with messages = get_flashed_messages(with_categories=true) %}
      {% if messages %}
        {% for category, message in messages %}
          <div class="flash flash-{{ category }}">
            {{ message }}
          </div>
        {% endfor %}
      {% endif %}
    {% endwith %}

    {% block content %}
    {% endblock %}

  </div>

</body>
</html>
82
files/templates/excel/job_status.html
Normal file
@@ -0,0 +1,82 @@
{% extends "base.html" %}

{% block content %}

<style>
  .job-card {
    max-width: 520px;
    margin: 80px auto;
    text-align: center;
  }

  .spinner {
    width: 40px;
    height: 40px;
    border: 4px solid #eee;
    border-top: 4px solid #E30613;
    border-radius: 50%;
    animation: spin 1s linear infinite;
    margin: 20px auto;
  }

  @keyframes spin {
    100% { transform: rotate(360deg); }
  }
</style>

<div class="card job-card">

  <h2>Excel Processing</h2>
  <p>Job ID: <b>{{ job_id }}</b></p>

  <div id="status-area">
    <div class="spinner"></div>
    <p>Processing...</p>
  </div>

  <div id="download-area" style="display:none">
    <a id="download-btn" class="btn btn-success">Download Result</a>
  </div>

  <div id="error-area" style="display:none; color:#dc2626">
    <p><b>Error occurred</b></p>
    <pre id="error-detail"></pre>
    <a href="/job/{{ job_id }}/logs" target="_blank">View logs</a>
  </div>

</div>

<script>
  const jobId = "{{ job_id }}";

  async function checkStatus() {
    const r = await fetch(`/job/${jobId}/status`);
    const s = await r.json();

    if (s.status === "DONE") {
      document.getElementById("status-area").style.display = "none";
      document.getElementById("download-area").style.display = "block";

      document.getElementById("download-btn").href = `/download/${jobId}`;

      return;
    }

    if (s.status === "ERROR") {
      document.getElementById("status-area").style.display = "none";
      document.getElementById("error-area").style.display = "block";

      document.getElementById("error-detail").innerText =
        s.detail || "Unknown error";

      return;
    }

    setTimeout(checkStatus, 2000);
  }

  checkStatus();
</script>

{% endblock %}
1023
files/templates/index.html
Normal file
File diff suppressed because it is too large
203
files/templates/invalidate.html
Normal file
@@ -0,0 +1,203 @@
{% extends "base.html" %}
{% block content %}

<h1>🔐 RAG Knowledge Governance</h1>

<!-- ===================== -->
<!-- 🔎 INVALIDATE SEARCH  -->
<!-- ===================== -->
<section style="margin-bottom: 40px;">
  <h2>❌ Invalidate Knowledge</h2>

  <form method="post" action="/admin/search">
    <textarea name="statement" rows="4" style="width:100%;"
              placeholder="Paste the invalid or outdated statement here">{{ statement }}</textarea>
    <br><br>
    <button type="submit">Search occurrences</button>
  </form>
</section>

<hr>

<!-- ===================== -->
<!-- ➕ MANUAL KNOWLEDGE   -->
<!-- ===================== -->
<section style="margin-bottom: 40px;">
  <h2>➕ Add Manual Knowledge</h2>

  <form id="manualForm">
    <textarea id="manualText" rows="8" style="width:100%;"
              placeholder="Paste validated Oracle technical information here..."></textarea>

    <br><br>

    <input type="text" id="reason" style="width:100%;"
           placeholder="Reason / justification (e.g. official Oracle clarification)"/>

    <br><br>

    <button type="button" onclick="submitManual()">Add Knowledge</button>
  </form>

  <pre id="manualResult" style="margin-top:15px;"></pre>
</section>

<hr>

<!-- ===================== -->
<!-- 📚 SEARCH RESULTS     -->
<!-- ===================== -->
<section>
  <h2>📚 Knowledge Matches</h2>

  {% if results|length == 0 %}
    <p><i>No matching knowledge found.</i></p>
  {% endif %}

  {% for r in results %}
  <div style="border:1px solid #ccc; padding:15px; margin:15px 0; border-radius:6px;">

    <strong>Chunk Hash:</strong>
    <span data-hash-label>{{ r.chunk_hash or "—" }}</span><br>
    <strong>Origin:</strong> {{ r.origin or "UNKNOWN" }}<br>
    <strong>Created at:</strong> {{ r.created_at or "—" }}<br>
    <strong>Status:</strong> {{ r.status }}<br>
    <strong>Source:</strong> {{ r.source }}<br>

    <div style="margin-top:10px;">
      <strong>Content:</strong>
      <pre style="
        white-space: pre-wrap;
        background:#f8f8f8;
        padding:10px;
        border-radius:4px;
        border:1px solid #ddd;
        max-height:400px;
        overflow:auto;
      ">{{ r.text }}</pre>
      <div style="margin-top:10px;">
        <strong>Change to:</strong>

        <textarea class="edit-box" data-hash="{{ r.chunk_hash }}"
                  style="
                    width:100%;
                    min-height:140px;
                    background:#f8f8f8;
                    padding:10px;
                    border-radius:4px;
                    border:1px solid #ddd;
                    font-family: monospace;
                  "
                  data-original="{{ r.text | e }}"
        >{{ r.text }}</textarea>
      </div>
    </div>

    {% if r.chunk_hash %}
    <button
      style="color:red;"
      data-hash="{{ (r.chunk_hash or '') | string | e }}"
      onclick="invalidateChunk(this)">
      Invalidate
    </button>
    <button
      style="color:blue; margin-left:10px;"
      data-hash="{{ (r.chunk_hash or '') | string | e }}"
      onclick="updateChunk(this)">
      Change content
    </button>
    {% else %}
    <p><i>Derived from Knowledge Graph (non-revocable)</i></p>
    {% endif %}
  </div>
  {% endfor %}
</section>

<script>
  async function submitManual() {
    const text = document.getElementById("manualText").value;
    const reason = document.getElementById("reason").value;

    const res = await fetch("/admin/add-knowledge", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      credentials: "same-origin",
      body: JSON.stringify({ text, reason })
    });

    const data = await res.json();
    document.getElementById("manualResult").textContent = JSON.stringify(data, null, 2);
  }

  async function invalidateChunk(btn) {
    const chunkHash = btn.dataset.hash;
    console.log("chunkHash:", chunkHash, "type:", typeof chunkHash);

    const res = await fetch("/admin/revoke", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      credentials: "same-origin",
      body: JSON.stringify({
        chunk_hash: chunkHash,
        reason: "Invalidated via admin interface"
      })
    });

    const data = await res.json().catch(() => ({}));
    console.log("server:", res.status, data);

    if (res.ok) btn.closest("div").remove();
  }

  async function updateChunk(btn) {
    const card = btn.closest("div");

    const chunkHash = btn.dataset.hash;

    const textarea = card.querySelector(`.edit-box[data-hash="${chunkHash}"]`);
    const newText = textarea.value;

    if (!chunkHash) return;

    if (!confirm("Update this chunk content?")) return;

    btn.disabled = true;
    btn.textContent = "Saving...";

    const res = await fetch("/admin/update-chunk", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      credentials: "same-origin",
      body: JSON.stringify({
        chunk_hash: chunkHash,
        text: newText
      })
    });

    const data = await res.json().catch(() => ({}));

    btn.disabled = false;

    if (res.ok) {
      const newHash = data.chunk_hash;

      const hashLabel = card.querySelector("[data-hash-label]");
      if (hashLabel && newHash) {
        hashLabel.textContent = newHash;
      }

      btn.dataset.hash = newHash;

      textarea.style.borderColor = "green";

      setTimeout(() => {
        btn.textContent = "Change content";
        textarea.style.borderColor = "#ddd";
      }, 1200);

    } else {
      alert("Failed to update");
      btn.textContent = "Change content";
    }
  }
</script>
{% endblock %}
25
files/templates/users/form.html
Normal file
@@ -0,0 +1,25 @@
{% extends "base.html" %}
{% block content %}

<div class="card">
  <h2>{{ "Edit User" if user else "New User" }}</h2>

  <form method="post">
    <input name="name" placeholder="Name" value="{{ user.name if user else '' }}" required>
    <input name="email" placeholder="Email" value="{{ user.email if user else '' }}" required>

    <select name="role">
      <option value="user">User</option>
      <option value="admin" {% if user and user.role == "admin" %}selected{% endif %}>Admin</option>
    </select>

    <label>
      <input type="checkbox" name="active" {% if user and user.active %}checked{% endif %}>
      Active
    </label>

    <button class="btn btn-success">Save</button>
  </form>
</div>

{% endblock %}
33
files/templates/users/list.html
Normal file
@@ -0,0 +1,33 @@
{% extends "base.html" %}
{% block content %}

<div class="card">
  <h2>Users</h2>

  <a class="btn btn-primary" href="{{ url_for('users.new_user') }}">+ New User</a>

  <table class="table table-dark">
    <tr>
      <th>Name</th>
      <th>Email</th>
      <th>Role</th>
      <th>Active</th>
      <th></th>
    </tr>

    {% for u in users %}
    <tr>
      <td>{{ u.name }}</td>
      <td>{{ u.email }}</td>
      <td>{{ u.role }}</td>
      <td>{{ "Yes" if u.active else "No" }}</td>
      <td>
        <a href="{{ url_for('users.edit_user', user_id=u.id) }}">Edit</a> |
        <a href="{{ url_for('users.delete_user', user_id=u.id) }}">Delete</a>
      </td>
    </tr>
    {% endfor %}
  </table>
</div>

{% endblock %}
114
files/templates/users/login.html
Normal file
@@ -0,0 +1,114 @@
{% extends "base.html" %}

{% block content %}

<style>
  .login-wrapper {
    display: flex;
    align-items: center;
    justify-content: center;
    height: calc(100vh - 120px);
  }

  .login-card {
    width: 380px;
    background: #ffffff;
    border-radius: 14px;
    padding: 32px;

    box-shadow: 0 10px 30px rgba(0,0,0,0.08);
    border: 1px solid #e5e7eb;
  }

  .login-title {
    text-align: center;
    font-size: 20px;
    font-weight: 600;
    margin-bottom: 24px;
    color: #111827;
  }

  .login-sub {
    text-align: center;
    font-size: 13px;
    color: #6b7280;
    margin-bottom: 22px;
  }

  .login-card input {
    margin-bottom: 14px;
  }

  .login-btn {
    width: 100%;
    margin-top: 8px;
  }

  .login-footer {
    text-align: center;
    margin-top: 18px;
    font-size: 12px;
    color: #6b7280;
  }

  .oracle-accent {
    height: 4px;
    width: 100%;
    background: #E30613;
    border-radius: 8px 8px 0 0;
    margin: -32px -32px 24px -32px;
  }
</style>

<div class="login-wrapper">

  <div class="login-card">

    <!-- Oracle red accent bar -->
    <div class="oracle-accent"></div>

    <div class="login-title">
      Sign in
    </div>

    <div class="login-sub">
      Oracle RFP AI Platform
    </div>

    {% with messages = get_flashed_messages() %}
      {% if messages %}
        {% for msg in messages %}
          <div class="flash flash-error">{{ msg }}</div>
        {% endfor %}
      {% endif %}
    {% endwith %}

    <form method="POST" action="/login">
      <input
        type="text"
        name="username"
        placeholder="Email"
        required
      />

      <input
        type="password"
        name="password"
        placeholder="Password"
        required
      />

      <button class="btn login-btn">
        Sign In
      </button>
    </form>

    <div class="login-footer">
      © Oracle RFP AI Platform
    </div>

  </div>

</div>

{% endblock %}
20
files/templates/users/set_password.html
Normal file
@@ -0,0 +1,20 @@
{% extends "base.html" %}
{% block content %}

<div class="card">

  <h2>Set Password</h2>

  {% if expired %}
    <p>Link expired or invalid.</p>
  {% else %}
    <form method="post">
      <input type="password" name="password" placeholder="New password" required>
      <input type="password" name="password2" placeholder="Confirm password" required>
      <button class="btn btn-primary">Save</button>
    </form>
  {% endif %}

</div>

{% endblock %}
51
files/templates/users/signup.html
Normal file
@@ -0,0 +1,51 @@
{% extends "base.html" %}
{% block content %}

<div class="card">

  <h2>Create Access</h2>

  <p class="small">
    Enter your email to receive a secure link and set your password.
  </p>

  <form method="post">

    <input
      type="email"
      name="email"
      placeholder="your@email.com"
      required
    />

    <br><br>

    <input
      type="text"
      name="name"
      placeholder="name (optional)"
    />

    <br><br>

    <button type="submit">
      Send password link
    </button>

  </form>

  <hr>

  {% with messages = get_flashed_messages(with_categories=true) %}
    {% if messages %}
      {% for cat, msg in messages %}
        <div class="info-box">
          {{ msg }}
        </div>
      {% endfor %}
    {% endif %}
  {% endwith %}

</div>

{% endblock %}