Tier 3 - AI Infrastructure

We don't just talk about AI; we run it in production.

Morpheus AI serving 50+ clients, plugged.in with a 2,700+ MCP server registry, and TÜBİTAK-backed cognitive memory R&D. We combine AI infrastructure engineering with enterprise infrastructure experience dating back to 2004.

  • 50+: Morpheus AI production clients
  • 2,700+: MCP servers in the plugged.in registry
  • 1,062+: users on the plugged.in platform
  • TRL 5→7: CogMem-AI TÜBİTAK R&D target

Morpheus AI — Production in 50+ Clients

Morpheus is VeriTeknik's AI assistant, integrated into the Ops Hub platform. It automates infrastructure operations with 30+ AI tools: server ordering, domain management, service monitoring, and invoice queries.

30+ AI Tools

VPS ordering, domain management, invoice queries, service monitoring, payments, and SMS 2FA, all built on the ActionContext architecture

Cognitive Memory

Six memory rings: fresh → long_term → habits → procedures → shocks → dos_and_donts. Reinforcement-based lifecycle with a BIOS layer
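The ring lifecycle above can be sketched as a reinforcement-driven promotion chain. The ring names come from the text; the promotion threshold and scoring are illustrative assumptions, not the production calibration.

```python
from dataclasses import dataclass

# Ring order from the text; the threshold below is an illustrative assumption.
RINGS = ["fresh", "long_term", "habits", "procedures", "shocks", "dos_and_donts"]
PROMOTE_AT = 3  # hypothetical: reinforcements needed before a memory moves up one ring


@dataclass
class Memory:
    content: str
    ring: str = "fresh"
    score: int = 0  # reinforcement counter (recall x success in the real system)

    def reinforce(self) -> None:
        """Record a successful recall; promote when the threshold is reached."""
        self.score += 1
        idx = RINGS.index(self.ring)
        if self.score >= PROMOTE_AT and idx < len(RINGS) - 1:
            self.ring = RINGS[idx + 1]
            self.score = 0  # counter restarts in the new ring


m = Memory("deploys on Friday need extra review")
for _ in range(3):
    m.reinforce()
print(m.ring)  # promoted from fresh to long_term after three reinforcements
```

In the real architecture promotion is calibrated per ring; here a single global threshold keeps the sketch short.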

Dual-Process Architecture

Focus Agent (System 1, ~100 ms): ikigAI local model. Analytics Agent (System 2): deep analysis. Based on Kahneman's dual-process theory
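The split between a fast Focus Agent and a slower Analytics Agent can be sketched as a simple router. The agents here are stubs and the routing flag is a placeholder assumption; the production logic classifies queries itself.

```python
import time


def focus_agent(query: str) -> str:
    """System 1: fast, local pattern-matching answer (stub)."""
    return f"quick answer for: {query}"


def analytics_agent(query: str) -> str:
    """System 2: slow, deliberate deep analysis (stub)."""
    return f"deep analysis for: {query}"


def route(query: str, needs_reasoning: bool) -> str:
    # The Focus Agent targets ~100 ms on a local model; anything needing
    # multi-step reasoning escalates to System 2 (how that decision is made
    # in Morpheus is not shown here).
    start = time.perf_counter()
    answer = analytics_agent(query) if needs_reasoning else focus_agent(query)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return f"{answer} ({elapsed_ms:.1f} ms)"


print(route("list my servers", needs_reasoning=False))
```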

plugged.in — 2,700+ MCP Servers, Open Source

plugged.in is an open-source platform that manages AI agent memory, tools, and knowledge bases. It is actively used by 1,062+ developers.

  • MCP Registry: 2,700+ MCP servers, discovery and integration
  • Persistent Memory: cross-session recall with a Jungian archetype system
  • Knowledge Base (RAG): document upload, chunking, embedding, querying
  • Collective Wisdom: anonymous pattern extraction (HMAC-SHA256, k-anonymity)
  • SDKs: JavaScript, Python, Go
  • Self-Hostable: MIT license, run on your own infrastructure
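The Collective Wisdom anonymization (HMAC-SHA256 plus k-anonymity) can be sketched with the standard library. The key, the k threshold, and the event shape are illustrative assumptions, not plugged.in internals.

```python
import hashlib
import hmac

SECRET_KEY = b"per-tenant-secret"  # hypothetical key; rotated and scoped in practice
K = 3  # hypothetical k-anonymity threshold


def pseudonymize(value: str) -> str:
    """Replace an identifying value with a keyed HMAC-SHA256 digest."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def extract_patterns(events: list[tuple[str, str]]) -> dict[str, int]:
    """Keep only patterns observed by at least K distinct tenants (k-anonymity)."""
    tenants: dict[str, set[str]] = {}
    for tenant, pattern in events:
        tenants.setdefault(pattern, set()).add(tenant)
    return {pseudonymize(p): len(t) for p, t in tenants.items() if len(t) >= K}


events = [("t1", "friday_deploy_fail"), ("t2", "friday_deploy_fail"),
          ("t3", "friday_deploy_fail"), ("t1", "rare_edge_case")]
patterns = extract_patterns(events)
print(list(patterns.values()))  # only the widely observed pattern survives: [3]
```

Patterns seen by fewer than K tenants are dropped entirely, so a rare observation can never be traced back to a single client.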

Jungian Archetype System

🔴 Shadow: warns of dangers

"Friday afternoon deploys fail 3.4x more often"

🔵 Sage: offers proven solutions

"Docker volume permission error: chmod 755 on host dir"

🟡 Hero: provides complete workflows

"Deploy → verify → rollback sequence for your K8s pipeline"

🟣 Trickster: flags hidden edge cases

"This namespace config silently drops health checks under load"

Our GPU, Our Model

Local inference with vLLM tensor parallelism on dual RTX 3090s (48 GB VRAM total). Morpheus's Focus Agent runs here; client data never leaves our infrastructure, keeping the service KVKK compliant.

  • Hardware: Dual RTX 3090, ASUS ROG Maximus XIII Hero Z590
  • Software: vLLM, LiteLLM proxy
  • Model: Gemma 4 E2B-IT (Focus Agent), Qwen3 series
  • Routing: LiteLLM → vLLM (local) or Claude/GPT (external)
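The routing rule in the last bullet can be sketched as a policy function. The model names and the heuristics are assumptions for illustration; in production this decision sits behind a LiteLLM proxy.

```python
def pick_backend(prompt: str, contains_client_data: bool) -> str:
    """Route a request to local vLLM or an external provider.

    KVKK-sensitive traffic must stay on local inference; everything else may
    use a stronger external model. Names and thresholds are illustrative.
    """
    if contains_client_data:
        return "vllm/local-focus-model"  # dual RTX 3090, data never leaves
    if len(prompt) > 2000:  # hypothetical: long prompts go to a bigger model
        return "anthropic/claude"
    return "openai/gpt"


print(pick_backend("summarize invoice for client X", contains_client_data=True))
```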

zvec — Sub-Millisecond Vector Search

Built on Alibaba's Proxima engine, zvec is the vector search infrastructure behind plugged.in and Morpheus AI. We migrated from Milvus; zvec proved superior in both performance and operational simplicity.

TÜBİTAK 1501 R&D

CogMem-AI — Cognitive Memory Architecture for AI Agents

CogMem-AI is a cognitive memory architecture inspired by Kahneman's dual-process theory and McClelland's complementary learning systems theory.

Dual-Process Cognitive Memory

System 1 (~100ms) and System 2 accessing the same substrate without conflict

Reinforcement Lifecycle

Promotion chain calibration with recall × success across 6+ rings

Knowledge Engineering Inference Engine

Conflict management between probabilistic LLM output and deterministic rules

Collective Gut Agent

Cross-tenant pattern clustering under differential privacy

Prof. Dr. Turgay Çelik (University of Agder, CAIR) · Dr. Kutluhan Erol (İzmir Ekonomi Üniversitesi)

2 SCI papers · 1 patent application (TPE + EPO) · TRL 5→7

PAP — Open Source Agent Lifecycle Protocol

An open protocol managing the discovery, authorization, execution and termination lifecycle of autonomous AI agents. Apache 2.0 licensed.

Dual-profile: PAP-CP (gRPC/mTLS) + PAP-Hooks (JSON-RPC/OAuth)

AI/ML Ops Services

We bring our product-proven experience to our clients.

RAG System Setup

Smart answers from your documents — with plugged.in Knowledge Base.

  • Document upload and chunking with plugged.in KB
  • Semantic matching with zvec vector search
  • Hybrid recall: semantic + keyword search
  • Token budget management and context optimization
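The chunking and token-budget steps above can be sketched as follows. The chunk size, overlap, and 4-characters-per-token estimate are illustrative assumptions, not plugged.in KB defaults.

```python
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into overlapping character windows (illustrative sizes)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def fit_budget(chunks: list[str], token_budget: int) -> list[str]:
    """Greedily pack retrieved chunks into the model's context window.

    Uses a rough 4-characters-per-token estimate, an assumption for the sketch.
    """
    picked, used = [], 0
    for c in chunks:
        cost = len(c) // 4 + 1
        if used + cost > token_budget:
            break
        picked.append(c)
        used += cost
    return picked


doc = "x" * 500
pieces = chunk(doc)
print(len(pieces), len(fit_budget(pieces, token_budget=60)))
```

Overlap preserves context at chunk boundaries; the budget step keeps the final prompt inside the model's context window after retrieval.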

MCP Integration

Connect the 2,700+ MCP server ecosystem to your agents.

  • MCP server discovery via plugged.in registry
  • Custom tool development
  • PAP Protocol compliant agent integration
  • Multi-provider routing (LiteLLM)
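Under the hood, MCP servers speak JSON-RPC. A minimal tool-discovery request looks like this; `tools/list` is a standard MCP method, while the transport and server identity are left out of the sketch.

```python
import json

# Minimal MCP tool-discovery request. The JSON-RPC 2.0 envelope and the
# "tools/list" method are standard MCP; how it reaches a server (stdio,
# HTTP, or the plugged.in proxy) is not shown here.
discover = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
wire = json.dumps(discover)
print(json.loads(wire)["method"])
```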

AI Agent Development

Production-ready agent architecture with Morpheus reference.

  • Proven ActionContext architecture with 30+ tools
  • Cognitive memory integration (CogMem)
  • Development with Claude Agent SDK
  • Autonomy levels and security layers
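"Autonomy levels and security layers" can be sketched as a permission gate in front of every tool call. The levels and the tool-to-level mapping are illustrative assumptions, not the Morpheus configuration.

```python
from enum import IntEnum


class Autonomy(IntEnum):
    READ_ONLY = 1  # may query state
    SUGGEST = 2    # may draft actions for human approval
    ACT = 3        # may execute actions directly


# Hypothetical mapping of tools to the minimum autonomy level they require.
REQUIRED = {"list_servers": Autonomy.READ_ONLY, "order_vps": Autonomy.ACT}


def allowed(tool: str, level: Autonomy) -> bool:
    """Security layer: a tool call passes only if the agent's level suffices.

    Unknown tools default to requiring full autonomy (deny-by-default).
    """
    return level >= REQUIRED.get(tool, Autonomy.ACT)


print(allowed("list_servers", Autonomy.SUGGEST), allowed("order_vps", Autonomy.SUGGEST))
```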

LLM Infrastructure

Local or cloud — KVKK compliant inference infrastructure.

  • ikigAI local inference (dual RTX 3090, vLLM)
  • LiteLLM multi-provider routing
  • Scalable inference with tensor parallelism
  • KVKK compliant local deployment option

Vector Database

Sub-millisecond vector search with zvec.

  • zvec (Alibaba Proxima) — production-grade performance
  • Migration experience from Milvus
  • pgvector — PostgreSQL integration for RAG v2
  • Embedding model optimization
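A minimal view of what the vector store does: rank stored embeddings by cosine similarity to a query embedding. This pure-Python exact search is only a sketch; zvec and pgvector do the same job at scale with ANN indexes.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def top_k(query: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Exact nearest-neighbour search; a real vector DB replaces this with an index."""
    return sorted(store, key=lambda doc: cosine(query, store[doc]), reverse=True)[:k]


# Toy 2-dimensional "embeddings"; real ones have hundreds of dimensions.
store = {"invoices": [1.0, 0.0], "deploys": [0.0, 1.0], "billing": [0.9, 0.1]}
print(top_k([1.0, 0.05], store, k=2))  # ['invoices', 'billing']
```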

Our Open Source Projects

  • plugged.in App: AI infrastructure platform (MIT)
  • plugged.in Plugin: Claude Code plugin, memory + archetypes (MIT)
  • plugged.in MCP Proxy: proxy for 2,700+ MCP servers (MIT)
  • PAP: agent lifecycle protocol (Apache 2.0)
  • Kintsugi: AI-native content annotation (MIT)

13 open source repos under VeriTeknik GitHub organization.

Why VeriTeknik — AI/ML Ops

We use our own products
Morpheus AI manages our own clients; with 50+ production clients, it is the best testing ground.
Real R&D, not just integration
CogMem-AI TÜBİTAK project, academic publications, patent application.
Open source ecosystem
plugged.in is MIT licensed. No vendor lock-in. Fork it, modify it, contribute.
20 years infrastructure + AI
We didn't discover AI in 2024; we build on 20 years of PCI-DSS, Kubernetes, and network engineering.

Let's Design Your AI Infrastructure Together

Let's evaluate your AI/ML needs. RAG, agents, inference infrastructure — what's the priority?

  • AI infrastructure needs analysis
  • Current stack assessment
  • Schedule a technical call