The Memory Problem of AI Agents: Why We Built a Cognitive Architecture from Scratch

LLM agents have no memory. Every conversation starts from zero. We translated Kahneman, Ebbinghaus, and McClelland's cognitive models into a production memory architecture.

April 5, 2026
16 min read
CogMem-AI — We turned 140 years of cognitive science into a production memory architecture


Morpheus is VeriTeknik's AI assistant. It serves our enterprise customers — handling everything from server management to billing inquiries, DNS configuration to meeting scheduling, all through natural language. It has over 30 tools, 14 external integrations, and is reachable through 6 different channels.

But it had a problem: no memory.

On Monday, a customer says: "Don't restart the server at night, last time everything went sideways." Morpheus hears this, maybe uses it in that conversation. On Wednesday, someone else from the same company writes: "Let's open a maintenance window tonight, restart the servers." Morpheus says "okay." Because it doesn't remember Monday. The conversation ended, the context window closed, the information vanished.

This isn't an edge case. This is the fundamental structural problem of every AI agent on the market today.

An LLM's context window is not memory. It's what Baddeley described in 2000 as "working memory" — a temporary space held during a conversation, erased when it ends. The human brain has this limitation too (Cowan's 4±1 chunk rule). But the human brain isn't just working memory. There's long-term memory, habits, traumas, instincts. The context window covers none of these.

"But there's RAG," you'll say. RAG is a search engine, not memory. Memory lives: it strengthens, weakens, transforms, dies. You can't ask RAG "do you remember what I said last week?" — because it doesn't remember, it searches. The difference between searching and remembering is the difference between Google and the human brain.

The business impact of every conversation starting from scratch is real. Customers repeat themselves. Trust erodes. "It was supposed to be so smart, but it doesn't even know what I told it last time." And they're right.

We lived this problem with Morpheus. In production, with real customers, every day. We decided to solve it. But as we dug in, we realized this isn't a simple "save it to a database" problem.


"Just Save It to SQL" — Your First Thought Was Ours Too

When you first encounter the memory problem, the solution that comes to mind is probably this:

-- Naive memory: save and retrieve
INSERT INTO memories (user_id, content, created_at)
VALUES ('customer-abc', 'Don''t restart the server at night', NOW());

-- Every conversation:
SELECT * FROM memories WHERE user_id = 'customer-abc'
ORDER BY created_at DESC LIMIT 50;

Simple. Works. We started here too. On our plugged.in platform, we built a three-layer memory prototype — Redis for short-term, PostgreSQL for medium-term, pgvector for semantic search. Ran it with over a thousand users. It worked, up to a point.

Then it didn't. Because this approach has six structural problems, and none of them are solved by "writing better SQL":

Everything has equal priority. "My name is Cem" and "We had a cascade failure last month, 4 hours of downtime" sit in the same table with the same weight. But the human brain doesn't work this way. Traumatic events — shocks — are always front and center. Nobody needs to ask your name, but you must remember that cascade failure in every server conversation.

Signal-to-noise ratio drops. When 500 memories accumulate, what do you do? Stuffing them all into the LLM's context window blows the token budget. Pull the last 50? What if the critical signal — a security rule recorded 6 months ago — isn't in those 50? Chronological order is not importance order. As memory grows, noise increases while signal stays the same. As SNR drops, so does the agent's response quality. This is information theory's fundamental problem — and "LIMIT 50" is not a solution.

It doesn't know how to forget. "Staging server IP: 192.168.1.50" from two years ago is still there. The server changed long ago but nobody deleted that record. Morpheus gives the customer the wrong IP. The human brain weakens unused information over time and forgets it — this isn't a bug, it's a feature.

No reinforcement. Information recalled every meeting that consistently produces good outcomes sits at the same level as something mentioned once and never used again. Humans don't work this way. Repeated, useful information strengthens. Unused information fades.

No context filter. The customer is asking about a server issue, but billing info, domain notes, and past meeting records are all being stuffed into the prompt. Noise increases, accuracy drops, tokens are wasted. The brain doesn't activate billing memories during a server conversation — there's an attention mechanism.

No audit trail. In a PCI-DSS audit, you can't answer "why did your agent make this decision? Based on what information?" A flat SQL table is not an audit trail. As Turkey's first PCI-DSS Level 1 service provider — a company that even spun a startup off that expertise — this question concerns us directly.

These six problems seem individually solvable. Add a priority column, set a TTL, build a tag system. But every addition complicates another problem. You end up not with a memory system but with patches stacked on patches.

We reached that point. Then we stopped and changed the question: instead of "how should a memory system work?" we asked "how does memory work?" We looked at cognitive science.


How Memory Actually Works — The Cognitive Science Model

When we changed the question, the answer changed too. Instead of "how do we store memory in a database?" we asked "how does human memory work, and how do we translate that into software?" We found three theoretical foundations.

Kahneman — System 1 and System 2 (2011). Daniel Kahneman's "Thinking, Fast and Slow" divides human cognition into two channels. System 1 is fast, automatic, reflexive: when someone says "server," your brain instantly activates server-related information. System 2 is slow, analytical: it kicks in when you say "analyze all of this customer's server issues over the last 6 months." In engineering terms: System 1 is an index lookup, System 2 is a full table scan. You need both, but at different times.

McClelland — Complementary Learning Systems (1995). McClelland and colleagues investigated why the brain doesn't immediately write new information to permanent memory. The answer: new information first enters a temporary buffer — the hippocampus. If it's recalled repeatedly and proves useful, it gets transferred to permanent memory — the neocortex. The reason for not writing immediately is to prevent incorrect or temporary information from polluting permanent memory. Think of it like Git: new information enters the staging area. If it gets committed repeatedly and passes review, it merges to main. If it doesn't — discard.

Ebbinghaus — The Forgetting Curve (1885). Hermann Ebbinghaus discovered 140 years ago that unused information weakens exponentially over time. But each recall resets the curve — the information strengthens again. This is nature's garbage collection mechanism. Unused information gets cleaned up so it doesn't keep taking up space. But frequently used information doesn't get deleted — it grows stronger.
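The mechanics are easy to sketch. Ebbinghaus's curve is usually written as R = e^(−t/S): retention R falls exponentially with time t since the last recall, while the stability S grows with each recall and flattens the curve. A minimal illustration (function name and parameter values are ours, not CogMem's actual code):

```python
import math

def retention(strength: float, days_since_recall: float) -> float:
    """Ebbinghaus-style retention: R = e^(-t/S).

    `strength` (S) grows with every recall, flattening the curve;
    `days_since_recall` (t) resets to zero whenever the memory is used.
    """
    return math.exp(-days_since_recall / strength)

# A rarely reinforced note fades fast; a heavily reinforced lesson holds on.
fresh_note = retention(strength=2.0, days_since_recall=10)
hard_lesson = retention(strength=30.0, days_since_recall=10)

assert fresh_note < 0.01   # effectively forgotten after ten days
assert hard_lesson > 0.7   # still vivid after the same ten days
```

Same elapsed time, wildly different retention — which is exactly why a reinforcement counter, not a timestamp, has to drive forgetting.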

When these three theories came together, we realized: this is exactly a software architecture problem. Fast channel / slow channel separation. Temporary buffer / permanent store separation. Strengthening through reinforcement, weakening through disuse. Rings, thresholds, triggers, lifecycle management.

As an engineering firm, we already look at things analytically. When we read these models, we saw that the memory structure described by cognitive science was essentially a specification waiting for implementation.

I first described this vision in September 2025, in a Medium article about plugged.in. I sat down with a piece of paper and drew how my own memory works — Focus Agent at the center, surrounded by Fresh Memory, Memory Ring, Gut Agent. The concentric rings in that article were a philosophical framework. CogMem-AI is its concrete implementation: a BIOS layer was added, a reinforcement engine was designed, Kahneman's System 1/2 became a real Focus Agent, Ebbinghaus's forgetting curve became a nightly cron job. Same vision, new engineering.

But the real breakthroughs came from analogies in my own life.

One day, a newborn baby came to mind. A newborn knows how to nurse. Nobody teaches it. This reflex is genetically coded behavior — firmware. Then I realized: an AI agent also needs immutable principles. A rule like "never log card data" shouldn't be learned, it should come built-in. Just like a baby's nursing reflex. That's how the BIOS layer was born — a much harder, much more explicit version of the "Policies" concept from the Medium article.

Then I caught myself explaining something to my daughters. I bring up a topic, they say "Dad, we haven't covered this at school yet." I tell them: "You don't need to understand it now. But when you hear it at school, there'll be a spark in your brain. You'll have that 'aha' moment. You'll reinforce it then, and you won't forget it for years." This is exactly recall-gated reinforcement. The first encounter enters fresh memory; on the second encounter, recall is triggered; if there's a successful match, the information consolidates. What I was telling my daughters was actually the mechanism that Ebbinghaus and Lindsey et al. (2024) experimentally proved — I was just saying it in dad language.

I thought about the Attention Agent. This agent didn't need to use a good model — its real job was filtering. An intelligent buffer zone between fresh memory and long-term memories. A filter that only passes information on relevant topics. Exactly like the relationship between CPU cache memory and RAM: cache is small but fast, RAM is large but slow, and the cache controller in between decides what to fetch.

Then I thought about how sleep has a purpose. The human brain processes the day's experiences overnight — consolidating what matters, discarding what doesn't. "Sleep consolidation" in neuroscience is exactly this. And I said: we need a nightly cron too. A nightly job that collects the day's recall statistics, reinforces successful memories, weakens unused ones. Sleep is nature's garbage collection. We did the same.

And finally, principles and procedures. We study for years, we forget most of the details. I can't solve a Fourier transform right now, for example. But 25 years later, I still remember it's about "sampling from a wave." The detail is gone, the essence remains. Ebbinghaus's forgetting gradient does exactly this — detail weakens, but the core concept stays. More importantly: I can now see that my entire academic life shaped my principles and procedures. Being able to code non-stop for 14 months, for instance — I acquired that ability while writing my thesis. I don't remember the details of that period, but the discipline — the procedural memory — came back when needed.

CogMem-AI was born from these observations. Not a database table. A cognitive architecture. And it's now running in production.


CogMem-AI Architecture — 6 Rings + BIOS

In the September 2025 Medium article, I drew the architecture as concentric rings. That vision still holds — but a lot changed in 7 months. Concepts became concrete, new layers were added, thresholds were set, production realities were faced.

Here's CogMem-AI's current architecture, bottom to top:

Focus Agent — The Self

The Focus Agent isn't a helper service. It's the agent's consciousness at that moment. My brain's Focus Agent is writing this article right now. Your Focus Agent is active as you read it. When a customer writes "nginx is giving 502," Morpheus's Focus Agent is focused on solving that problem — it's the one doing the actual work.

During this focus, the Focus Agent naturally knows its context: what topic it's working on, what the intent is, the urgency level. This context information becomes topic tags as a byproduct. This is Kahneman's System 1 — not conscious analysis, reflex-level awareness.

But the Focus Agent doesn't deal with memory. Just as you're not consciously deciding which memories your brain should retrieve while reading this paragraph. That's the Attention Agent's job.

Why do we need a Focus Agent? Think of a light bulb and a laser with the same power. The bulb spreads the same energy in every direction — it illuminates a room but can't cut anything. The laser concentrates the same energy on a single point — it can cut through steel. An agent without a Focus Agent is a light bulb: it stuffs all memories, all context, all history into the prompt. Scatters the same token budget in every direction. An agent with a Focus Agent is a laser: it concentrates the same budget on a single topic. Same power, different impact.

Attention Agent — The Attention Filter

The Attention Agent is the background process that the Focus Agent isn't even aware of. It takes the context information produced by the Focus Agent and performs bidirectional filtering:

Outbound (memory to conversation): "The Focus Agent is talking about nginx right now. Which memories are relevant? Shocks and dos_and_donts always come through. From long_term, only nginx-tagged ones. Don't fetch billing information — that's noise."

Inbound (conversation to memory): "The customer said something important in this conversation. Which ring should this information go to? A rule (dos_and_donts), a habit (habits), or an unproven fresh piece of information (fresh)?"

In CPU terms: the Focus Agent is the running process, the Attention Agent is the cache controller. The process knows what it wants; the cache controller fetches the appropriate data. The process doesn't even need to be aware of the cache — but without the cache controller, everything comes from RAM and the system slows down. In our system too, without the Attention Agent, all memories arrive unfiltered — SNR drops, the token budget blows up, response quality collapses.
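The outbound pass reduces to a simple rule: safety-critical rings always flow through, everything else must match the Focus Agent's current topic. A minimal sketch — record shapes and field names are hypothetical, only the ring names follow the article:

```python
# Rings that bypass the topic filter (safety critical).
ALWAYS_INJECT = {"shocks", "dos_and_donts"}

def filter_outbound(memories: list[dict], topic: str) -> list[dict]:
    """Outbound attention pass: keep a memory if its ring always injects,
    or if one of its topic tags matches the current focus."""
    return [
        m for m in memories
        if m["ring"] in ALWAYS_INJECT or topic in m.get("tags", ())
    ]

memories = [
    {"ring": "shocks",        "tags": ["incident"], "text": "2024 cascade failure"},
    {"ring": "long_term",     "tags": ["nginx"],    "text": "3 servers in Ankara"},
    {"ring": "long_term",     "tags": ["billing"],  "text": "invoices on the 1st"},
    {"ring": "dos_and_donts", "tags": [],           "text": "no night restarts"},
]

picked = [m["text"] for m in filter_outbound(memories, topic="nginx")]
# The billing note is filtered out; the shock and the rule always pass.
assert picked == ["2024 cascade failure", "3 servers in Ankara", "no night restarts"]
```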

Fresh Memory — The Buffer Zone

Every new piece of information lands here. The default entry point. That first encounter I described to my daughters — "you don't need to understand it now" — that's fresh memory. The information is unproven, unreinforced, temporary.

Accessed via semantic search. If it's not reinforced, it weakens over time and gets deleted. The purpose of this buffer is to prevent incorrect or temporary information from polluting permanent memory — the software equivalent of McClelland's hippocampus model.

Memory Ring — 6 Rings

Six rings in total — five covered here, with the sixth and outermost, the Gut Agent, getting its own section below. Each ring has its own persistence and injection rules:

long_term — Proven facts. "The customer has 3 servers in Ankara." Reinforced, high success score. Only injected in relevant conversations (topic filtered). If reinforced, decay stops.

habits — Recurring patterns. "Always wants a backup before deployment." Saying it once doesn't make it a habit — it has to be recalled successfully multiple times before being promoted to this ring. This was "Practice Memory" in the Medium article; the name and mechanism are now refined.

procedures — Step-by-step procedures. "DNS change: lower TTL, wait, change, raise TTL." Explicit, documented workflows. Topic filtered injection.

dos_and_donts — "Don't restart the server at night." Explicitly stated rules. Injected in every conversation, no topic filter. No decay — never weakens. Because the cost of forgetting this information far exceeds the cost of remembering it.

shocks — Traumatic events. "2024 cascade failure, 4 hours of downtime." Like touching a hot stove — once is enough. Injected in every conversation, never deleted. Whatever the Focus Agent says, shocks are always in the prompt.

The distinction between "always inject" and "topic filtered" is one of CogMem's most critical design decisions. dos_and_donts and shocks bypass the attention filter because they're safety critical. Other rings only appear in relevant conversations because we need to preserve SNR. This distinction seems simple but doesn't exist in the naive SQL approach — there, either everything comes or nothing does.
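That design decision is small enough to write down as a policy table. A sketch of what per-ring policy might look like — the two flags mirror the rules described above, but the structure itself is illustrative, not CogMem's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RingPolicy:
    always_inject: bool   # bypasses the topic filter every conversation
    decays: bool          # subject to the Ebbinghaus gradient

# Illustrative policy table. Fresh memory is included as the entry layer.
RING_POLICIES = {
    "fresh":         RingPolicy(always_inject=False, decays=True),
    "long_term":     RingPolicy(always_inject=False, decays=True),
    "habits":        RingPolicy(always_inject=False, decays=True),
    "procedures":    RingPolicy(always_inject=False, decays=True),
    "dos_and_donts": RingPolicy(always_inject=True,  decays=False),
    "shocks":        RingPolicy(always_inject=True,  decays=False),
}

# The invariant: whatever always injects must never decay.
assert all(not p.decays for p in RING_POLICIES.values() if p.always_inject)
```

The naive SQL approach has no place to hang these two flags — which is why it either injects everything or nothing.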

BIOS — The Firmware Layer

The Medium article had "Policies." In CogMem, this layer took on a much harder identity: BIOS.

A computer's BIOS runs before the operating system loads. It's immutable. Users can't modify it. CogMem's BIOS is the same: immutable principles. PCI-DSS rules, security policies, company standards. Like a baby's nursing reflex — not learned, given.

Today, 108 BIOS rules are defined across more than 14 categories. Examples: "Always run config test before restart," "Never echo auth codes," "PCI-DSS: never log card data." These rules are injected from the relevant category based on the conversation topic. Server conversations get server BIOS rules, security conversations get security rules.

No competitor — not MemGPT, not Mem0, not Zep — has a layer like this. This is an original contribution born from VeriTeknik's PCI-DSS experience. Because in an audit room, "it decided based on whatever it randomly pulled from the database" is not an acceptable answer. BIOS ensures the agent's decisions have a deterministic, auditable foundation.

Gut Agent — Collective Wisdom

The outermost ring. It was in the Medium article, it's in CogMem too — but not yet at full capacity in production.

The concept is the same: anonymized pattern clustering from all customers. Not any single customer's specific data, but general patterns distilled from all customers. "The intuition of an experienced engineer" — wisdom distilled not from one project, but from hundreds.

The CRUD infrastructure and anonymization mechanism have been implemented. But cross-tenant clustering is still in pilot phase. This is the most challenging layer both technically and from a privacy perspective — extracting meaningful patterns under a differential privacy budget is an open research problem.

Token Budget — Limited Attention Resources

Human attention is limited. Baddeley's working memory model defines the amount of information that can be held simultaneously as 4±1 chunks. LLM context windows are limited too — but more importantly, what you put in the context window directly affects response quality.

In CogMem, a 3,200 token memory budget is allocated for each message cycle. This budget is distributed across 8 layers in this priority order:

BIOS (300) → shocks (300) → dos_and_donts (400) → gut (400) → long_term (500) → habits (200) → procedures (300) → fresh (300)

When the budget fills up, lower-priority layers get cut. But BIOS, shocks, and dos_and_donts are never cut — safety-critical information is always in the prompt. This is an attention economy that maximizes signal-to-noise ratio.
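The allocator itself is a priority walk: grant each layer up to its cap in order, cut what doesn't fit, but grant the protected layers no matter what. A hypothetical sketch using the per-layer caps from above (the function and record names are ours):

```python
# Per-layer caps from the article, in priority order.
BUDGET = [
    ("bios", 300), ("shocks", 300), ("dos_and_donts", 400), ("gut", 400),
    ("long_term", 500), ("habits", 200), ("procedures", 300), ("fresh", 300),
]
PROTECTED = {"bios", "shocks", "dos_and_donts"}

def allocate(total: int, demand: dict[str, int]) -> dict[str, int]:
    """Grant min(cap, demand) per layer in priority order until `total`
    runs out; protected layers are granted even past the budget."""
    grants, remaining = {}, total
    for layer, cap in BUDGET:
        want = min(cap, demand.get(layer, 0))
        if layer in PROTECTED:
            grants[layer] = want                      # never cut
        else:
            grants[layer] = max(0, min(want, remaining))
        remaining -= grants[layer]
    return grants

demand = {layer: cap for layer, cap in BUDGET}        # everything at its cap
tight = allocate(1500, demand)
assert tight["bios"] == 300 and tight["dos_and_donts"] == 400  # protected
assert tight["fresh"] == 0    # lowest priority is cut first under pressure
```

Under pressure, the low-priority rings go dark while BIOS, shocks, and dos_and_donts stay lit — the laser, not the light bulb.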


The Reinforcement Engine — What's Remembered Strengthens, What's Forgotten Fades

Memory rings aren't a static store. They're a living system. Information enters, strengthens, gets promoted, weakens, dies. The mechanism managing this lifecycle is what we call the reinforcement engine.

The foundation is a simple observation: frequently recalled information that produces good outcomes is important. Information that's never recalled is probably not. The human brain already does this — Lindsey et al. demonstrated the "recall-gated plasticity" mechanism experimentally in their 2024 eLife paper: the act of recalling strengthens information; unrecalled information weakens. We translated this into software.

How It Works

Every memory record has two counters: how many times it was recalled (recall count) and how well it worked when recalled (success score). When a memory is used in a conversation, it's logged in the recall log. If the conversation ended successfully, the success score goes up.

Then night falls.

The human brain processes the day's experiences overnight. "Sleep consolidation" in neuroscience — during sleep, important memories consolidate, unimportant ones are discarded. We run a nightly cron job with the same logic. Every night, it collects the day's memory statistics and makes one of five decisions:

Reinforce. If a memory was recalled 2 or more times with a success score above 80% — this information is working. Confidence rises to maximum, decay stops. This memory is now protected.

Promote. If it was recalled 3+ times, success score above 90%, and still in the fresh ring — this information has proven itself. It gets flagged as a promotion candidate to the next ring up. That "aha moment" I described to my daughters — information moving from fresh to long_term because it was recalled successfully over and over.

Weaken. If it was recalled 2+ times but the success score is below 30% — this information is being recalled but isn't working. Maybe outdated, maybe wrong. Confidence drops by half. It drops further the next night. The system actively penalizes incorrect information.

Forget. If it was never recalled, older than 30 days, and not reinforced — Ebbinghaus's forgetting curve kicks in. Confidence decreases by 10% every 30 days. When it drops below 20%, the memory is quietly deleted. 140 years of cognitive science, in a cron job.

Protect. If the block type is shocks or dos_and_donts — decay is never applied. The "don't restart the server at night" rule will stay there even if it's not recalled for 5 years. The "cascade failure happened" record stays in the prompt forever. The memory of touching a hot stove doesn't fade.

Why Manual Deletion Isn't Enough

"An admin can delete outdated information," you might say. But consider 100 customers, each with hundreds of memory records. Who's tracking which information has become stale? The reinforcement engine automates this: what works stays, what doesn't fades, what's dangerous never gets deleted. Nature's garbage collection mechanism — not manual, organic.

Threshold Table

| State | Condition | Result |
| --- | --- | --- |
| Reinforce | recall ≥ 2, score ≥ 0.8 | Max confidence, decay stops |
| Promote | recall ≥ 3, score ≥ 0.9 | Promotion candidate |
| Weaken | recall ≥ 2, score < 0.3 | Confidence halved |
| Forget | 30 days, not reinforced | Confidence −10%, deleted at < 0.2 |
| Protect | shocks / dos_and_donts | Decay never applied |
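The nightly pass boils down to one decision function over these thresholds. A sketch — the record fields (`ring`, `recalls`, `score`, `age_days`, `reinforced`) are hypothetical names for the counters described above, and the branch order matters (protect first, promote before reinforce):

```python
def nightly_decision(mem: dict) -> str:
    """One nightly decision for a memory record, per the threshold table."""
    if mem["ring"] in ("shocks", "dos_and_donts"):
        return "protect"                    # decay is never applied
    if mem["ring"] == "fresh" and mem["recalls"] >= 3 and mem["score"] >= 0.9:
        return "promote"                    # proven: candidate for the next ring
    if mem["recalls"] >= 2 and mem["score"] >= 0.8:
        return "reinforce"                  # working: max confidence, decay stops
    if mem["recalls"] >= 2 and mem["score"] < 0.3:
        return "weaken"                     # recalled, but failing: halve confidence
    if mem["recalls"] == 0 and mem["age_days"] > 30 and not mem["reinforced"]:
        return "forget"                     # Ebbinghaus gradient takes over
    return "keep"

assert nightly_decision({"ring": "fresh", "recalls": 4, "score": 0.95,
                         "age_days": 5, "reinforced": False}) == "promote"
assert nightly_decision({"ring": "shocks", "recalls": 0, "score": 0.0,
                         "age_days": 900, "reinforced": False}) == "protect"
assert nightly_decision({"ring": "long_term", "recalls": 0, "score": 0.0,
                         "age_days": 45, "reinforced": False}) == "forget"
```

A shock untouched for 900 days still returns "protect" — the hot-stove memory never fades.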


What Are Others Doing?

We're not the only ones seeing this problem. There's serious work on AI agent memory — MemGPT (now Letta), Mem0, Zep, A-MEM, LangMem. Good engineers behind all of them, all trying to solve a real problem. But their approaches differ.

Hu et al.'s comprehensive 2025 survey (arXiv:2512.13564), after surveying LLM agent memory systems, explicitly identifies reinforcement learning integration as an "open problem." Wang et al.'s 2024 survey likewise finds no reference implementation of dual-process cognitive integration at production scale. The academic literature sees this gap too.

The common thread of current systems: they all keep memory in a flat structure. Add, search, delete. Some do it well, some do it very well. But none ask: "How reliable is this memory? When will it strengthen? When will it die? In which conversation is it needed, in which is it noise?"

Let's look at concrete differences:

Memory structure. MemGPT uses a flat two-layer structure — core memory and archival memory. Mem0 uses a flat extract-update cycle. Zep has partial categories. CogMem-AI has 6 rings + BIOS, and each ring has different injection rules, different decay policies, different token budgets.

Reinforcement. None of them have it. A memory is added, called, or deleted — that's it. The concept that being recalled strengthens a memory, and not being recalled weakens it, doesn't exist in current systems.

Forgetting. MemGPT uses FIFO — the oldest record gets deleted. This is chronological, not importance-based. A 5-year-old cascade failure record gets deleted for being oldest; yesterday's trivial note stays. Mem0 and A-MEM have no structured forgetting mechanism. CogMem has the Ebbinghaus gradient — unused information fades over time, but critical information is never deleted.

Attention filter. None of them have a System 1/System 2 distinction. When memory is called, everything comes or gets keyword-filtered. In CogMem, the Focus Agent classifies the topic, and the Attention Agent retrieves only relevant rings. Billing information doesn't appear in server conversations.

Immutable principles. None of them have a BIOS layer. There's no distinction between learned information and information that shouldn't be learned but should be given. CogMem has 108 BIOS rules — PCI-DSS, security policies, operational standards. These aren't learned, don't change, and are always there.

Audit trail. Being able to answer "why did the agent make this decision?" in a PCI-DSS audit is mandatory. Most current systems lack a structured audit trail. CogMem has a 5-level, immutable audit trail — because we're a company that goes through these audits.

| Feature | MemGPT | Mem0 | Zep | CogMem-AI |
| --- | --- | --- | --- | --- |
| Memory structure | 2-layer (flat) | Flat extract/update | Partial categories | 6 rings + BIOS |
| Reinforcement | None | None | None | recall × success |
| Forgetting | FIFO | None | None | Ebbinghaus gradient |
| Attention filter | None | None | None | Focus + Attention Agent |
| Immutable principles | None | None | None | BIOS (108 rules) |
| Cross-tenant wisdom | None | None | None | Gut Agent (designed) |
| PCI-DSS native | None | None | None | Level 1 certified |

Don't read this comparison as "we're good, they're bad." These systems work well in their own contexts. But they all stay within the "add memory, search memory, delete memory" paradigm. We're asking a different question: how does memory live, how does it strengthen, how does it die, and how is it audited? That question leads to a different architecture.


Why We Wrote an R&D Project

CogMem-AI's core architecture is running in production. 6 rings, BIOS, Focus Agent, Attention Agent, reinforcement engine — all of it with real customers, in real conversations, every day. So why did we also write a TÜBİTAK 1501 R&D project?

Because there's a chasm between working and working correctly. And bridging that chasm requires research, not engineering.

Think about the reinforcement thresholds. Currently, a memory consolidates after being successfully recalled 2 times. Why 2? Why not 3? Why not 1.5? We set this threshold intuitively and it works reasonably — but we don't know if it's optimal. To know that, you need A/B testing in a production environment with real customer data. This isn't a configuration decision; it's an experimental calibration problem.

Or think about the definition of "successful conversation." The reinforcement engine uses success_score — but who determines success? If the customer said "thanks," is it successful? If they closed the conversation without saying anything, is it unsuccessful? Automatically detecting the difference between silent acceptance and active confirmation is an open research problem.

Or think about the Gut Agent's privacy engineering. We want to extract anonymized patterns from different customers — but does meaningful information survive under a differential privacy budget (ε=1.0)? Finding the balance between privacy and utility is a problem that needs to be experimentally tested with 10,000+ synthetic records.
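For intuition on why ε matters: the classic Laplace mechanism adds noise of scale sensitivity/ε to each released statistic, so a smaller privacy budget means proportionally louder noise. A toy illustration — this is the textbook mechanism, not CogMem's actual privacy pipeline, and the function name is ours:

```python
import math
import random

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: int = 1) -> float:
    """Release a count under the Laplace mechanism for epsilon-DP:
    noise ~ Laplace(0, sensitivity/epsilon), drawn by inverse transform."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -(sensitivity / epsilon) * sign * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(42)
# At epsilon=1.0 the noise std is sqrt(2)/epsilon ≈ 1.4, so a pattern
# seen 120 times across tenants is still clearly visible...
assert abs(dp_count(120, epsilon=1.0) - 120) < 15
# ...while at epsilon=0.01 the std is ~141 and the signal can drown.
```

Whether a rare but important operational pattern survives that noise floor is precisely the utility-versus-privacy question the pilot has to answer.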

These aren't things solved by "writing better code." These are research questions requiring the cycle of form hypothesis, design experiment, measure, calibrate, repeat. R&D in the sense defined by the Frascati Manual: systematic work involving technological uncertainty with unpredictable outcomes.

The cognitive science dimension of the project demands this too. We're translating Kahneman's System 1/2, McClelland's complementary learning systems, and Ebbinghaus's forgetting curve into software — but the validity of these models in the LLM agent context hasn't been academically tested yet. Hu et al.'s 2025 survey states this explicitly: reinforcement learning integration for memory management is still an open problem.

That's why we're running the project with two academic advisors: Prof. Dr. Turgay Çelik from the University of Agder, Norway (CAIR — Centre for Artificial Intelligence Research) and Dr. Kutluhan Erol from İzmir University of Economics. Our targets include SCI-indexed academic publications and patent applications. This isn't just a product development project — it's the first production-scale test of cognitive science models in the LLM agent context.

One more thing. We're an infrastructure and security engineering firm. We've been managing servers for 20 years, going through PCI-DSS audits, writing firewall rules. That 20 years of accumulated knowledge — knowing which server configuration will cause problems, which DNS change is risky, which security vulnerability is critical — is currently encoded in BIOS rules and the Gut Agent's preset knowledge. CogMem-AI is a project to transform this accumulation into a scalable structure. The raw material of the rule base isn't academic — it's operational. 20 years of field experience.


Vision — An Agent Without Memory Is Not an Agent

AI agents today operate without memory. Every conversation starts from zero. Every customer repeats themselves. Every error is handled as if it's happening for the first time. This makes agents tools — smart tools, but still tools.

Memory will make them partners.

An agent that learns from experience, knows what's important, understands what to forget, draws lessons from past traumas, and feeds on collective wisdom — that's no longer a tool, it's a colleague. You explain something to your junior twice, the third time they know it. An agent with memory should do the same.

CogMem-AI is currently running in Ops Hub as a separate container called morpheus-memory. The cognitive memory architecture — 6 rings, BIOS, Focus Agent, Attention Agent, reinforcement engine — is in production, with real customers, every day. The Gut Agent's cross-tenant clustering layer has been designed and its infrastructure is ready, moving to pilot phase.

On our roadmap is an open-source npm package — cogmem-ai, under the Apache 2.0 license. Sector-specific BIOS template libraries: PCI-DSS v4.0, ISO 27001:2022, KVKK presets. When other companies want to add cognitive memory to their agents, they shouldn't have to start from zero.

The real agent economy can't be built until AI's memory problem is solved. Agents need to evolve from task-executing tools into partners that understand context, learn from experience, and know how to forget. 140 years of cognitive science has told us how we remember and why we forget. Now it's time to translate that into software.

We've started.


Cem Karaca — VeriTeknik Bilişim CEO / Founder

GitHub · VeriTeknik · Previous article: Building Digital Consciousness (Medium, September 2025)
