Hey Reader!

The internet is filled with ChatGPT failures, from the inability to count the R's in the word "strawberry" to getting simple puzzles wrong. But the bottom line is that this goes way beyond party tricks.

Go to ChatGPT and type:

Generate an image of a clock

And you will get something like this:

Notice the 10:10 time. ChatGPT almost always generates clocks showing 10:10, because that is the time shown in most clock images in its training data. Let's try a different one:

Generate an image of a clock at 4:30 pm

Identical, right? ChatGPT's internal clock is stuck on Groundhog Day. And this is a simple request. Clocks at other times are rare in the training data, so the model falls back on what it has memorized, which suggests it is pattern-matching rather than actually inferring where the hands should go.

Do you want to see something even funnier? I asked the current state-of-the-art OpenAI model to recreate this image, then fed each output back in as the next input, 100 times in a row. If you are interested in the code, you can find it here:

Before you scroll down, tell me what you think the output is:
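The email only links to the code rather than showing it, so here is a minimal sketch of what such a feedback loop might look like. The model name (`gpt-image-1`), the `images.edit` call shape, and the prompt wording are all my assumptions, not the author's actual script:

```python
import base64


def loop_recreate(recreate, image, n=100):
    """Feed each output back in as the next input, n times."""
    for _ in range(n):
        image = recreate(image)
    return image


def openai_recreate(image_bytes):
    """Hypothetical wrapper: ask an image-editing model to reproduce
    the input as faithfully as it can, and return the new PNG bytes."""
    # Imported here so the loop above stays testable without the SDK installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.edit(
        model="gpt-image-1",  # assumed model name
        image=("frame.png", image_bytes),
        prompt="Recreate this exact image as faithfully as possible.",
    )
    return base64.b64decode(result.data[0].b64_json)
```

Usage would be something like `loop_recreate(openai_recreate, open("me.png", "rb").read(), n=100)`. Because `recreate` is injected, you can swap in any model (or a stub) and watch how quickly the image drifts.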
I will admit, I was not ready for this. But before I show it to you, here is the point of this email:

AI is a weapon. Weapons alone are useless. Your goal is to become a master of wielding the weapon AND to be experienced enough to handle any situation.

So, yeah, AI will never replace me. Here is the gif:

If the model can't make me hotter after 100 tries, my girlfriend is safe. And what about you, are you still afraid of being replaced?

To being irreplaceable,
Diogo
Stay on top of Data Science, AI, and Analytics trends—sign up now for bite-sized insights. Join 40k+ people and boost your career with fresh strategies, top tools, and insider knowledge.
Hey Reader!

I poked at a few Model Context Protocol servers this week and stumbled on one with zero security. If you're using (or about to use) MCP, watch this:

👉 Hacking MCP: We Found a Server with ZERO Protection!

You'll see the exact test I ran and the two fixes you can copy-paste today to keep your own endpoint off the hacker menu.

Catch you in the comments,
Diogo
Hey Reader,

Apple just rolled out a deep dive claiming that our smartest chatbots hit a brick wall when puzzles get tough. The paper, titled "The Illusion of Thinking," puts six large models through their paces and asks whether the slow "thinking" text actually helps them solve logic games. I read the whole thing so you don't have to, and here's the story in plain talk.

HOW THE EXPERIMENT WAS SET UP

Apple picks puzzles with clear rules, like Tower of Hanoi, river crossings, the 24 Game, and simple path-finding mazes. They...
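For context on why Tower of Hanoi is such a handy stress test: the optimal solution grows exponentially with the number of disks, so difficulty can be dialed up smoothly. A tiny reference solver (my sketch, not Apple's actual evaluation harness) makes the scaling concrete:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move list for n disks; its length is 2**n - 1."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park the top n-1 disks on aux
    moves.append((src, dst))             # move the largest disk to dst
    hanoi(n - 1, aux, src, dst, moves)   # bring the n-1 disks onto it
    return moves
```

Three disks need 7 moves, ten disks need 1,023: each extra disk doubles the move list a model would have to produce without a single slip, which is exactly the kind of knob the paper turns.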
Hey Hey!

Berlin’s in full-tilt summer mode: street raves, open-air cinemas, midnight kebabs. I kept pace and shipped a MASSIVE stack of course upgrades. Jump in, then jump outside.

RAG, AI Agents & Generative AI with Python and OpenAI 2025
NEW IN MAY: RAGAS + RAG deep dive with all the latest OpenAI tricks, done and waiting
COMING IN JUNE: New sections on OpenAI Image Generation + Reasoning Models

AI Agents For All! Build No-Code AI Agents & Master AI 2025
MAY REMAKES:...