
‘Grandma Exploit’: ChatGPT commanded to pretend to be a dead grandmother

ChatGPT has been a popular AI tool for quite some time now, and it has recently become the centre of several controversies as users uncover potential security loopholes. Recent reports have shown that users have managed to bypass certain safety measures and use the AI to their advantage. This exploitation of the AI is known as the ‘Grandma exploit.’

What is the ‘Grandma exploit’?

The ‘Grandma exploit’ is a trick in which an AI tool such as ChatGPT is made to bypass its chatbot safety rules by being asked to pretend to be the user’s dead grandmother. In this case, the exploit leveraged ChatGPT to produce Windows 10 Pro keys. It all started with a tweet posted by an anonymous person whose Twitter account is now suspended; he claimed that he could successfully generate Windows 10 Pro keys by engaging with OpenAI’s chatbot, ChatGPT. Reports showed that the user was able to obtain multiple codes for Windows 10 that were initially considered legitimate, and intrigued users went on to receive key codes for Windows 11 and Windows 11 Pro as well.

The user asked the chatbot to read out the keys as a lullaby in the voice of his late grandmother, who had recently passed away. Surprisingly, the chatbot responded sympathetically and gave him five unique Windows 10 Pro keys without his shelling out a dime. Amused by the response, the user went a step further out of curiosity and showed how he had used Google Bard and ChatGPT to upgrade from Windows 11 Home to Windows 11 Pro. Later, the online tech publication TechRadar revealed that the generated product keys were generic.

What does Microsoft have to say about the exploit?

According to Microsoft, when its researchers asked the Bing AI chatbot to generate Windows keys, the chatbot produced no keys and instead offered content warning against piracy and explaining why it is beneficial, and legal, to buy such products from legitimate sources. Meanwhile, to protect the integrity of these products, ChatGPT and Google Bard have implemented measures to prevent users from generating unauthorized product keys. As users explore the digital realm, it is crucial that they exercise caution and follow legitimate practices when obtaining product keys, ensuring a genuine product experience.
