I'm coming to the conclusion that the gap between systematic and casual AI users will only widen.
An example? I recently went back to the "old" Claude 3.5 Sonnet, expecting a drop in quality compared to the newer $200 o1-pro. I was surprised at how well it "understood" nuances and delivered accurate answers with little effort, thanks to the long hours I spent over the summer refining its knowledge of my context and preferences.
Lately, I've been very focused on developing my own know-how for building context for AI. Interestingly, I'm developing it with AI, and I designed the work process itself with AI.
Meanwhile, on LinkedIn everyone seems to be competing to criticize how generic AI-generated content is, explaining why "yet another model doesn't work." Dude, it's not that the model doesn't work. It's that you don't understand how to use it, and what for :)
To illustrate the idea of building context, imagine that every query you send to an LLM already carries your context:
Private:
- History of decisions and their consequences
- Most important relationships, goals, and values
- Week-by-week summaries
Business:
- Knowledge base about products and services in the company wiki
- Documentation of processes and standards
- Guidelines for leadership or sales frameworks
The effect? The model, new or old, delivers ultra-personalized recommendations: whom to involve in a specific decision, what to pay attention to, which risks to consider.
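To make this concrete, here's a minimal sketch of what "carrying your context" can look like in practice. Everything in it is an assumption for illustration: the file names and folder layout are hypothetical, and the Anthropic Python SDK is used only as an example client; the same pattern works with any LLM API.

```python
from pathlib import Path

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

# Hypothetical context files -- use whatever structure fits your own notes.
CONTEXT_FILES = [
    "context/decisions.md",         # history of decisions and their consequences
    "context/goals_values.md",      # key relationships, goals, and values
    "context/weekly_summaries.md",  # week-by-week summaries
    "context/company_wiki.md",      # products, services, processes, standards
]


def build_system_prompt() -> str:
    """Concatenate the context files into a single system prompt."""
    parts = []
    for path in CONTEXT_FILES:
        p = Path(path)
        if p.exists():
            parts.append(f"## {p.stem}\n{p.read_text(encoding='utf-8')}")
    return (
        "You are my assistant. Ground every answer in this context:\n\n"
        + "\n\n".join(parts)
    )


client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=build_system_prompt(),  # every query now starts from your context
    messages=[
        {
            "role": "user",
            "content": "Whom should I involve in the Q3 pricing decision, "
            "and which risks should I weigh?",
        }
    ],
)
print(response.content[0].text)
```

The design point: the value sits in the files, not the model. They accumulate week after week, so swapping the model behind the API changes far less than you'd expect.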
This isn't about whether you use free AI or pay $200 a month. It's like expecting your first tennis match to be won for you by a carbon racket with Rafael Nadal's signature. In reality, you're probably at least a dozen frustrating training sessions away from your first real game.
There's only one difference. The ability to play tennis won't determine your professional future, but the ability to work with AI already does :)
So, shall we swap notes on building context for our AI?