TimeShift is Happening (Again)
How Memory Expansion, Strategic Forgetting, and Architectural Shifts are Changing AI
Something incredible is happening (again) with time.
I’m using “TimeShift” to describe the recurring pattern where breakthrough innovations restructure how humans spend their productive hours.

You have drastically more Time than those before you… again.
Each major technological leap, from mechanization to electrification to digitization, has shifted time allocation in ways that redefined entire economies. We're entering another TimeShift now, driven by AI memory architecture and agentic systems.
Innovation has always compressed time by creating efficiency. We build tools that give us back hours in our day. But what's coming with AI agents and platforms that extend memory and compute across thousands of interactions is different in scale and consequence.
One worker becomes 50 workers. Each of those 50 workers does an hour of work in one minute. Simultaneously. Asynchronously.
We're about to see an expansion of available labor time that has never existed before. In 1948, Norbert Wiener saw this coming when he wrote in Cybernetics:
"Let us remember that the automatic machine is the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic consequences of slave labor."
- Norbert Wiener
Father of Cybernetics, 1948
He was thinking about industrial automation. We're now extending this to knowledge work, and soon to robotics at scale.
As I wrote about last week, Pedro Domingos and Greg Brockman noted that one AI year now equals seven internet years in terms of adoption velocity and capability advancement. This isn't a metaphor.
The implications for how we build AI products are already hitting (but unevenly).
AI-generated content just crossed 52% of all online articles published in English. Machines now produce more written content than humans do.

This is based on 65,000 articles. My guess is that “hybrid” drastically changes this chart.
There's a lot of hybrid content in between, though.
For example, I use voice recordings repeatedly throughout the week as I curate information to discuss on this blog. It's still all my voice and thoughts. The difference is I've moved from Grammarly-level editing to something more substantive, using a combination of Claude, Gemini, and ChatGPT.
Each model has its strengths, but I don't think they replace the complexity of my mind trying to make sense of everything happening each week.
This acceleration connects to one central dynamic: the relationship between memory and compute.
Claude Skills restructures this relationship in ways that change how we should build AI products.

The 44-Year Battle Between Memory and Compute
AI capability has always been a balance between two resources. This isn't new.
From 1982 to 2007, we made steady progress along a relatively predictable path of increasing both compute and memory.
The Commodore 64 (my first computer) gave way to more powerful processors, then CUDA GPUs unlocked parallel processing in 2007.
The SSD era, starting around 2010, created a dramatic shift in the memory axis. Storage became fast enough and cheap enough that we could start thinking about AI problems differently. GPT-3 in 2020 represented the first leap where massive models could actually be deployed.
GPT-5 continued that trajectory.
But look at GPT-6 on the chart. The dotted line shoots almost vertically. That trajectory is a guess, but Sam Altman has declared that GPT-6 is all about memory.
This represents the coming phase where memory expansion becomes the dominant force in capability improvement. We're about to see models that can maintain context across entire codebases, full company knowledge bases, and extended multi-day conversations.
Managing context efficiently is now the primary constraint.
Claude just launched Skills. If you haven't looked at this yet, imagine storing skills outside your chat so your context window doesn't wear out. Skills are packaged instructions that Claude loads when needed, extending its memory without exhausting your context window or paying for repeated explanations of how you work.

This is so damn promising. Like Projects & Claude Code had a baby.
I've been experimenting with this over the weekend. The ability to have flowing dialogue with an AI that maintains context through Skills rather than raw conversation history makes the interaction different.
Oh, and it works.
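The mechanics are simple: a skill is just a folder with a SKILL.md file, with a YAML frontmatter block (name, description) that Claude always sees, and full instructions it loads only when the skill is relevant. Here's a minimal sketch; the skill name and contents are hypothetical:

```markdown
---
name: weekly-blog-editor
description: Edits voice-transcribed drafts into posts in my voice and style.
---

# Weekly Blog Editor

When editing a draft:
1. Keep my first-person voice and short paragraphs.
2. Tighten prose; never add claims I didn't make.
3. Flag any statistic that needs a source.
```

The frontmatter is the only part that lives in context permanently. The body below it is the "memory" that gets pulled in on demand.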

What This Means for Building AI Products
Aaron Levie, CEO of Box, summarized the current state better than most technical analyses I've read. He’s fantastic at that, definitely worth a follow.
We're years away from magical superintelligence. But what we have right now are models with strong general reasoning that improve in specific domains through applied context.

AI may take years to get to AGI - but what we have now IS worth $Trillions.
The key phrase to consider is "applied context."
Models fail when they lack the right context about your business, your data, your processes, and your constraints.
Most SaaS companies are still thinking about AI as a feature you add to existing workflows. I believe this is the wrong approach.
The companies that will win are thinking about AI as the interface itself, with domain context and business logic packaged as accessible memory rather than hardcoded rules.
Developers have an advantage here. Their workflows through IDEs lend themselves naturally to agentic patterns. Most business domains don't have as clean an analog.
Whoever figures out how to package domain expertise as Skills or similar memory constructs will win. Building the AI is easy. Structuring knowledge so AI can use it effectively without requiring the entire context in every conversation is hard.

This also reminds me of “The Book of Why” by Judea Pearl.
And Pedro Domingos might be that guy. Author of "The Master Algorithm" (2015) - I often refer to his work, and this latest paper is insane.
The tensor logic paper he just penned gets at this from a more technical angle. It proposes a unified language that combines neural and symbolic AI at a base level. The core observation is that logical rules and Einstein summation are essentially the same operation. This means you can elegantly implement reasoning, learning, and knowledge representation in a single framework.

This paper proposes a language to connect Logic and Intelligence.
For business applications, this matters because AI will be able to reason about domain constraints using both learned patterns and explicit rules. That combination makes AI work in regulated industries or complex business contexts where pure neural approaches (tend to) fall short.
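Domingos' core observation is easy to see in code. A logical rule like grandparent(x, z) ← parent(x, y) ∧ parent(y, z) is just an Einstein summation over the shared index y on boolean tensors. A toy sketch, with a made-up four-person world:

```python
import numpy as np

# Hypothetical world: parent[i, j] = 1 means person i is a parent of person j.
parent = np.zeros((4, 4), dtype=int)
parent[0, 1] = 1  # 0 is a parent of 1
parent[1, 2] = 1  # 1 is a parent of 2
parent[1, 3] = 1  # 1 is a parent of 3

# Logical rule: grandparent(x, z) <- parent(x, y) AND parent(y, z).
# As Einstein summation: sum out the shared index y, then threshold to boolean.
grandparent = (np.einsum("xy,yz->xz", parent, parent) > 0).astype(int)

print(grandparent[0, 2])  # 1: person 0 is a grandparent of person 2
```

The same einsum machinery that powers neural nets executes the symbolic rule, which is the unification the paper is pointing at.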

Forgetting as a Feature, Not a Bug
During Andrej Karpathy's recent podcast, he made an observation that connects to something I experienced at a conference in Rome. Karpathy noted that humans not remembering everything is a feature rather than a bug.
This seems counterintuitive when we're focused on expanding AI memory. But it's central to understanding how to build better systems.
I was at the SIINDA conference where the Vatican art director gave the opening keynote about plenary indulgences during this jubilee year. She walked through the theology of forgiveness. It struck me as related to my own talk about how humans forget things.
The timing of "forgive and forget" as a system feature rather than a limitation made me laugh. But the insight is important.
Systems that remember everything become brittle and expensive. The reason human memory is selective is that perfect recall would be computationally prohibitive and strategically counterproductive. We need to forget old information to make room for new patterns. We need to forgive past errors to move forward without being trapped by historical decisions.
AI systems are running into this same challenge now.
Context windows keep expanding, but at some point, having unlimited memory creates more problems than it solves. You start getting confused by contradictory information from different time periods. You waste computing resources processing irrelevant historical data.
You make it harder to adapt to new information that contradicts old patterns.
This is why architectural innovations like Skills matter more than just making context windows bigger. Skills let you be selective about what memory you load and when. This is closer to how human cognition actually works.
You don't remember everything about your job simultaneously. You load the relevant mental model when you need it and let the rest stay dormant.
The companies that figure out how to build AI systems with strategic forgetting and selective memory loading will have an advantage over those that try to cram everything into context.
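A minimal sketch of that pattern (all names and data here are hypothetical): keep every skill's one-line description visible, but pull a skill's full body into context only when it overlaps the task at hand.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str   # short summary, always "in mind"
    body: str          # full instructions, loaded only on demand

SKILLS = [
    Skill("brand-voice", "tone and style rules for blog posts", "...full style guide..."),
    Skill("quarterly-report", "template and metrics for finance reports", "...full template..."),
]

def load_relevant(task: str, skills=SKILLS):
    """Return only the skill bodies whose descriptions share words with the task."""
    words = set(task.lower().split())
    return [s.body for s in skills if words & set(s.description.lower().split())]

# Only the blog skill is pulled into context; the finance skill stays dormant.
context = load_relevant("draft a blog post in our usual tone")
```

Real systems would use embeddings rather than word overlap, but the architecture is the point: descriptions are cheap, bodies are expensive, and most bodies stay unloaded most of the time.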

Building for the New Memory Architecture
The acceleration we're seeing in AI adoption means decisions made now about architecture will compound faster than in previous technology cycles. You can't wait for consensus on best practices. The field is moving too quickly. But you also can't afford to guess wrong on foundational bets.
Start by experimenting with Skills or equivalent memory extension approaches. Whether you use Claude's Skills, custom GPTs, or other frameworks, the pattern of separating persistent knowledge from conversational context is going to be standard architecture within a year. Build a skill that codifies how your team works, your product requirements, or your domain expertise. Watch how it changes the quality of AI interactions.
Then, audit where you're currently using AI and identify context waste. Look for places where you're explaining the same background information repeatedly in conversations. Look for prompts that have grown to multiple paragraphs of setup before getting to the actual question. These are candidates for memory offloading through Skills or structured knowledge bases.
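The refactor looks like this in miniature (the background text is a hypothetical stand-in): the stable setup moves out of every prompt into one persistent block, and each call carries only the new question.

```python
# Hypothetical stable background that used to be pasted into every prompt.
BACKGROUND = "We are a B2B SaaS company. Our product does X. Our tone is Y."

def prompt_before(question: str) -> str:
    # Before: the background is re-sent on every single call (context waste).
    return BACKGROUND + "\n\n" + question

def prompt_after(question: str) -> dict:
    # After: the background lives once, in a system block (or a Skill),
    # and each call carries only the new question.
    return {"system": BACKGROUND, "messages": [{"role": "user", "content": question}]}
```

Multiply that saved setup across hundreds of conversations and the offloading pays for itself quickly.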
Finally, treat AI integration as an interface problem, not a feature problem. Most companies are asking, "Where can we add AI to our product?" when they should be asking, "How do we rebuild our product with AI as the primary interface?"
You need to rearchitect, not replace.
The companies winning on AI aren't adding chatbots to old workflows. They're rethinking the workflow around conversational interfaces with deep domain context.
Your strategy needs to account for this newest TimeShift.