OpenAI, an independent research body co-founded by the world’s richest man Elon Musk along with Sam Altman, launched its chatbot ChatGPT on Wednesday last week, and in just one week the service has grown to over 1 million users.
The chatbot has taken the internet by storm. With its human-like answers and quick replies, it has left the internet inquisitive.
What is ChatGPT?
ChatGPT is an interactive dialogue model, a chatting robot, trained using artificial intelligence (AI) and machine learning. It understands and responds to natural human language, answering questions and conversing much as a person would. It gets its name from GPT, or Generative Pre-trained Transformer, a deep-learning language model that specializes in generating human-like written text. Deep learning is a machine learning method in which three or more layers of neural networks attempt to emulate the behavior of the human brain, allowing the system to learn the way humans do.
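To make the idea of stacked layers a little more concrete, here is a minimal, illustrative sketch in Python of a small three-layer network. The layer sizes, weights, and input are made up purely for illustration; real models like GPT use transformer layers with billions of parameters, but the underlying idea of passing data through successive layers is the same.

```python
import numpy as np

# Illustrative only: a tiny three-layer feed-forward network.
# All sizes and values are arbitrary, chosen just to show the layering idea.

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Random weights for three layers (input -> hidden -> hidden -> output)
w1 = rng.normal(size=(8, 16))   # layer 1
w2 = rng.normal(size=(16, 16))  # layer 2
w3 = rng.normal(size=(16, 4))   # layer 3

def forward(x):
    h1 = relu(x @ w1)    # first layer transforms the input
    h2 = relu(h1 @ w2)   # second layer builds on the first
    return h2 @ w3       # final layer produces the output

x = rng.normal(size=(1, 8))  # a made-up input vector
print(forward(x))
```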
But how is it different from Siri or Alexa, which can also talk and respond, tell a joke or read a poem? How is ChatGPT different from any other AI model already available?
Well, this one is different because ChatGPT remembers earlier parts of the conversation for context, admits its mistakes, challenges incorrect premises, and sometimes even refuses to answer.
How does ChatGPT work?
A user can start by visiting OpenAI’s website, clicking the Try ChatGPT button, and chatting away.
OpenAI trained ChatGPT using a training method known as reinforcement learning from human feedback, or RLHF, which uses a reward-and-punishment system to train the AI. Whenever the model takes an action, that action is classified as either desirable or undesirable: desirable actions are rewarded, while unwanted actions are penalized. Through this trial-and-error process, the AI learns what works and what doesn’t.
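The reward-and-punishment idea can be illustrated with a toy sketch in Python. This is not OpenAI’s actual training code; the canned replies, the stand-in "human feedback" function, and the simple running-average update are all assumptions made purely to show how rewarded behavior gets reinforced over repeated trials.

```python
import random

# Toy illustration of the reward idea behind RLHF (not OpenAI's actual method).
# A hypothetical reward function stands in for human feedback, scoring the
# desirable reply higher than the unwanted ones.

replies = ["helpful answer", "off-topic rambling", "refusal"]
scores = {r: 0.0 for r in replies}   # running value estimate per reply
counts = {r: 0 for r in replies}

def human_feedback(reply):
    # Stand-in for a human rater: +1 for the desirable reply, -1 otherwise.
    return 1.0 if reply == "helpful answer" else -1.0

for step in range(1000):
    # Explore randomly some of the time, otherwise pick the best-scoring reply.
    if random.random() < 0.1:
        choice = random.choice(replies)
    else:
        choice = max(replies, key=lambda r: scores[r])
    reward = human_feedback(choice)
    counts[choice] += 1
    # Trial and error: update the running average score for the chosen reply.
    scores[choice] += (reward - scores[choice]) / counts[choice]

print(scores)  # the rewarded reply ends up with the highest score
```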
OpenAI also used humans as trainers for this AI. In these training conversations, the human trainers played both roles, the user and the AI assistant. But this training method can be a bit problematic, as it can sometimes mislead the model.
An ideal answer would depend on what the model knows, rather than on what the human demonstrator knows, and that may be a limitation of this exciting new thing on the internet.
Therefore, if a user asks a complex question, phrases it poorly, or doesn’t pose a proper question at all, the bot may refuse to answer.
Vidit Atrei, cofounder of e-commerce company Meesho, took to LinkedIn to point out that until a few months ago, everyone thought AI could only do repetitive tasks, but ChatGPT makes clear that this isn’t the case. AI may also come after creative jobs.
For now, ChatGPT is free to use during its research period only.
CEO Sam Altman has already indicated that the company will look to monetize the platform in the future. “We’ll have to monetize it at some point; the cost of computation is eye-watering,” he said in a tweet when asked if the service would be free forever.
We’ll have to monetize it at some point; The cost of computation is eye-watering
— Sam Altman (@sama) December 5, 2022
The average is probably a single digit cent per chat; trying to figure out more precisely how we can optimize this
— Sam Altman (@sama) December 5, 2022