© 2025 — Ahmed Shaban

© 2025 — Ahmed Shaban

11:22
Icon

Contact

Icon

Contact

Let's Get in Touch!

l

l

l

Terminus

Terminus

Terminus

Trustworthy AI Grounded in Peer-Reviewed Research

Trustworthy AI Grounded in Peer-Reviewed Research

Terminus was built during a 48-hour hackathon as a working prototype of a trustworthy, locally deployable AI system. It combines Retrieval-Augmented Generation (RAG) with a local LLM (Ollama3) to ensure every response is backed by verifiable peer-reviewed research.

Unlike generic AI assistants, Terminus never fabricates information. Every answer links directly to its original source, complete with author, journal, and year.

Core Outcome:

  • 100% citation accuracy

  • 0% hallucination rate

  • Fully functional local deployment on standard laptop hardware

AI_in_Action Hackathon

AI_in_Action Hackathon

Background

This project began through the Rockwell Fellowship, where I was paired with Kris Rockwell as a real-world client. The fellowship aimed to bring together entrepreneurship, product development, and new technologies through hands-on work with real client problems.

During discovery meetings, Kris expressed a need for AI systems that could be trusted in secure, research-heavy environments, settings like healthcare, engineering, and academia where sending data to cloud APIs wasn’t possible.

He highlighted five critical challenges:

  • Privacy & Security: Sensitive data couldn’t be processed in the cloud.

  • Reliability: Most AI systems hallucinate or provide unverifiable claims.

  • Domain Specificity: General-purpose models lack research-level depth.

  • Cost Control: API-based models were too expensive to scale.

  • Offline Access: Research teams needed tools that run locally.

When the university announced the 48-hour hackathon challenge: “AI Reliability Through Peer-Reviewed Research Integration”, it was a perfect alignment. The challenge directly matched Kris’s real-world need.

Goal

To create a system that restores trust in AI by combining:

  1. Local Privacy – Runs entirely offline via Ollama3.

  2. Research Integrity – Sources only peer-reviewed publications.

  3. Traceable Results – Every claim is verifiable through citation validation.

  4. Transparency – Admits when information isn’t available, instead of guessing.

  5. Accessibility – Easy to deploy for universities, research labs, and small teams.

Research

I conducted interviews, literature reviews, and user validation to confirm the need.
Findings included:

  • 80% of AI users express concerns about misinformation and hallucinations (Forbes).

  • Researchers lose time verifying AI results manually.

  • Small research firms can’t afford enterprise AI licenses or external APIs.

Use Cases Identified

  • University labs conducting literature reviews.

  • Small R&D teams validating environmental materials.

  • Professors and students seeking reliable AI tutoring tools.

Solution Space

Terminus integrates modern AI architecture with research-grade verification.

Architecture Highlights

  • Document Ingestion Layer: Extracts and embeds peer-reviewed papers for semantic search.

  • RAG Engine: Retrieves the most relevant sections of text.

  • Ollama3 Core: Generates responses strictly from retrieved documents.

  • Citation Validator: Confirms every claim matches the cited paper.

  • User Interface: Displays clear answers with linked citations and confidence scores.

Tech Stack:

  • Ollama3 (Local LLM)

  • LangChain (RAG pipeline)

  • ChromaDB / FAISS (Vector database)

  • Python + Gradio Interface

  • Sentence Transformers for embeddings

Result:
A system that delivers academic-quality answers, complete with sources and zero fabrication.

Final Product & Learning

The final prototype was fully functional and demonstrated during the hackathon:

  • Query Example: “What is the environmental impact of PLA vs traditional plastics?”

  • Response: Terminus cited verified studies from Nature Materials and Journal of Applied Polymer Science, each cross-checked for accuracy.

  • Verification Outcome:

    • Citation Accuracy: 100%

    • Hallucination Rate: 0%

    • Relevance: 100%

    • Evidence Strength: 100% peer-reviewed

Icon

Contact

Let's Get in Touch!