
I was surprised to read the opinions circulating from the same few sources claiming that o1pro is "nothing special." But remembering how impactful my earlier experiences with LLMs had been, I had to test the most intelligent model myself.

Conclusions?

The qualitative leap I experienced can only be compared to switching from GPT-4 to Claude 3.5 in August this year.

On the first day alone:

  • I sharpened the profile of our ideal client, something I've been struggling with for months,
  • I designed points of contact between Tigers, Automation House and #22community,
  • I created several AI assistants that will streamline my work,
  • I processed 22 weeks of my own AI Journaling, receiving incredibly high-quality analysis of my behavior patterns and thoughts.

The last one was the most interesting: I set my behavioral profile against my company's strategy and received recommendations that hit sensitive points about what I need to be particularly careful with.

Thanks also go to Przemysław for the prompt he shared on Slack with a group of people who regularly use AI for journaling. I'm grateful!

The o1pro tests I've seen provided poor context and used mediocre prompts, while this model is built for high-quality, complex, multi-threaded reflection. It's also the first model I couldn't "wear out" over a weekend within a single, multi-threaded chat packed with information. But I haven't said my last word yet ;) No other LLM, including the new "regular" o1, matches it at connecting the dots while maintaining iron discipline in keeping to the thread and considering the broad context, including our numerous remarks. It feels like real analysis, not harping on the same points and spitting out very basic conclusions.

On the other hand, o1pro is a considerable expense, so if you have questions and want to avoid disappointment, I'm happy to help.

PS: That feeling when o1pro processes dozens of pages of your AI journal for 2.5 minutes and you watch the progress bar...