The launch of OpenAI's ChatGPT last year created a storm in the technology world, with the platform gaining a million users within a month. To compete, Google announced its own generative AI chatbot, Bard, though it was initially available only to approved testers.
But a week after OpenAI announced GPT-4, a more capable and accurate version of ChatGPT, Google on Tuesday (March 21) began the public release of its chatbot Bard. For now it is available only in the US and UK, and consumers can join a waiting list for access to the platform.
"We're starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We're beginning with the US and the UK, and will expand to more countries and languages over time," Google said in a press release.
As the AI race heats up, Google and Microsoft announced last week that they are building draft-writing technology into their word processors and other collaboration software.
Bard, like other AI chatbots, is programmed to answer questions using natural, human-like language. It is powered by a research large language model (LLM), a lightweight and optimized version of LaMDA, an earlier Google language model.
When given a prompt, Bard generates a response by selecting, one word at a time, from the words that are likely to come next. Google says that Bard won't always pick the most probable choice, since doing so wouldn't lead to very creative responses.
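Google has not published Bard's actual decoding procedure, but the behavior described above matches a standard technique called temperature sampling: instead of always taking the single most probable next word (greedy decoding), the model samples from the probability distribution, occasionally choosing a less likely but more interesting word. The sketch below is purely illustrative; the function name, the toy word probabilities, and the temperature parameter are all assumptions, not Bard's real internals.

```python
import random

def sample_next_word(candidates, temperature=1.0):
    """Pick the next word from a list of (word, probability) pairs.

    temperature=0 mimics greedy decoding: always take the most
    probable word. Higher temperatures flatten the distribution,
    giving less likely words a better chance and making the output
    more varied (and, loosely, more "creative").
    """
    if temperature == 0:
        return max(candidates, key=lambda wp: wp[1])[0]
    # Re-weight each probability by 1/temperature, then renormalize.
    weights = [p ** (1.0 / temperature) for _, p in candidates]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices([w for w, _ in candidates], weights=weights)[0]

# Toy distribution over possible words after "The cat sat on the ..."
candidates = [("mat", 0.6), ("sofa", 0.25), ("roof", 0.1), ("moon", 0.05)]
print(sample_next_word(candidates, temperature=0))    # greedy: always "mat"
print(sample_next_word(candidates, temperature=1.0))  # sampled: varies per run
```

At temperature 0 the output is deterministic; at higher temperatures repeated calls produce different continuations, which is the trade-off the article describes.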
The language model behind Bard was never fully released to the public, but it came to prominence when a Google engineer claimed that some of its answers were so compelling that he believed it was sentient. Google denied the claims and fired him.
When using Bard, you'll often get the choice of a few different drafts of its response so you can pick the best starting point for you. You can continue to collaborate with Bard from there, asking follow-up questions. And if you want to see an alternative, you can always have Bard try again.
Because the technology relies on patterns in past data to create rather than identify content, it is not without faults. Google has also said that because Bard learns from a wide range of information reflecting real-world biases and stereotypes, those sometimes show up in its outputs too.
This means that Bard, like many other generative AIs, can provide inaccurate, misleading or false information without knowing it, and may present wrong answers with confidence.
Google gave an example: asked to share a couple of suggestions for easy indoor plants, Bard presented convincing ideas but got some details wrong, such as the scientific name for the ZZ plant, which is actually Zamioculcas zamiifolia, not Zamioculcas zamioculcas.
In a demonstration of the site, bard.google.com, to Reuters, Jack Krawczyk, a senior product director at Google, showed how the program produces blocks of text in an instant, different from how ChatGPT types out answers word by word.
Bard also has a feature that shows three different versions or "drafts" of any given answer. Users can pick the response that best suits their requirements, and there is also a "Google it" button if the user wants web results for the query.
But unlike ChatGPT, Bard is not proficient at generating computer code. Writing complex code, and even debugging it, was what gave ChatGPT its claim to fame in the beginning.
ChatGPT remembers what you said to it earlier in a conversation, but only up to about 3,000 words. Google said it has limited Bard's memory of past exchanges in a chat, though the company claims this ability will grow over time.
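A memory limit like this is typically enforced by truncating the oldest parts of the conversation so the prompt stays within a fixed budget. Neither OpenAI nor Google has published its exact mechanism, so the following is a minimal sketch under that assumption, using a simple word-count budget; the function name and parameters are hypothetical.

```python
def truncate_history(messages, max_words=3000):
    """Return the most recent messages whose combined word count fits
    within max_words, dropping the oldest messages first.

    A minimal sketch of how a chatbot might bound its conversational
    memory; real systems count model tokens rather than words.
    """
    kept, total = [], 0
    for msg in reversed(messages):          # walk newest to oldest
        n = len(msg.split())
        if total + n > max_words:
            break                           # budget exceeded: stop keeping
        kept.append(msg)
        total += n
    return list(reversed(kept))             # restore chronological order

history = ["one two three", "four five", "six"]
print(truncate_history(history, max_words=3))  # oldest message is dropped
```

Anything older than the budget simply never reaches the model, which is why a chatbot can "forget" the start of a long conversation.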
While ChatGPT knows several languages, including French, Arabic, Spanish, Mandarin, Italian, Japanese, and Korean, Bard can only speak and understand English. And although ChatGPT is proficient in many languages, the quality of its responses varies outside English, its primary language.