Just three weeks ago, I was amazed by its capabilities, and today? I wait 2-3 minutes for a response whose quality resembles older GPT generations.

What went wrong?

As usual, when big tech announces breakthrough launches, or Sam Altman tweets about the massive usage of o1-pro (while OpenAI is subsidizing it), it's worth looking deeper. So I dug into the documentation for the o1 family of models and discovered new parameters, such as "reasoning_effort". When the consumer version, where you can't control these parameters manually, shows a noticeable decline in reasoning quality, the conclusion is obvious.
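
To make this concrete, here is a minimal sketch of what that knob looks like on the API side, assuming the OpenAI Python SDK and an o1-class model; the model name and prompt are illustrative.

```python
# Minimal sketch, assuming the OpenAI Python SDK exposes "reasoning_effort"
# for o1-class models ("low" | "medium" | "high"). Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",               # assumed o1-family model identifier
    reasoning_effort="high",  # the knob the consumer app does not expose
    messages=[
        {"role": "user", "content": "Walk me through your reasoning on this problem."}
    ],
)

print(response.choices[0].message.content)
```

API users can set this per request; the consumer app offers no such control.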

This is part of a broader game in financial capitalism. Notice how individual companies are making increasingly crazy forecasts and taking specific market positions:

  1. OpenAI = AGI
  2. Google = quantum, multimodality
  3. Meta = OpenSource
  4. X = freedom of speech
  5. Anthropic = B2B
  6. Apple = privacy

This is a classic marketing game: positioning and securing your slice of the market while offering very similar solutions. In the end, everyone will have similar products, and smaller companies will build specialized versions of them.

In all of this, I appreciate Anthropic's approach with Claude the most. They transparently communicate server overload and automatically switch to "concise" mode, with the option to change it back manually. From the beginning, they have focused on substance and solid execution.

Fortunately, after canceling all my subscriptions, I use #AION from Automation House, integrated with our wiki in ClickUp. When one model fails, I simply switch to another in this technology-agnostic AI assistant.
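
For illustration only, here is a hypothetical sketch of that model-agnostic idea (not AION's actual implementation): a thin wrapper that routes the same prompt to whichever backend is currently healthy.

```python
# Hypothetical sketch of a model-agnostic wrapper; backend names and the
# ask() helper are invented for illustration, not taken from AION.
from typing import Callable, Dict, List, Optional

# Each backend is just a function from prompt to answer; real API calls
# (OpenAI, Anthropic, etc.) would sit behind these names.
Backend = Callable[[str], str]

def ask(prompt: str, backends: Dict[str, Backend], order: List[str]) -> str:
    """Try backends in the given order and return the first successful answer."""
    last_error: Optional[Exception] = None
    for name in order:
        try:
            return backends[name](prompt)
        except Exception as err:  # e.g. an overloaded or degraded model
            last_error = err
    raise RuntimeError(f"All backends failed: {last_error}")

# Usage: swap the order when one provider has a bad day.
# answer = ask("Summarize this page", backends, order=["claude", "gpt"])
```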

But if I had to pay for one subscription today, it would be Claude. What about you, which one would you choose?