
Google Bard’s Capabilities (and Limitations)

In the U.S. and Britain, Google has released a new chatbot called Google Bard. How does it compare with other options?

Bard, a new chatbot created by Google, has been made available to a limited number of people in the United States and Britain.

Google has been cautious with its release of Bard as it tries to manage the unexpected behavior exhibited by this type of technology. Bard competes with similar technologies from Microsoft and its partner, OpenAI, but Google is deploying the chatbot separately from its internet search engine and other products.

A brief guide to the new bot follows:

It has flaws, and it acknowledges them.

In a message at the top of the page, Bard tells you right away that it makes mistakes. “I’m Bard, your creative and helpful collaborator. I won’t always get it right, but your feedback will help me improve,” Bard says.

This chatbot is based on a large language model, or L.L.M., which is a kind of artificial intelligence that learns by analyzing vast amounts of data from the internet.

Bard suggests a few prompts, including “Explain why large language models sometimes fail.”

It is designed for a variety of casual uses.

Bard is not intended to be a search engine; it is an experiment to show how people can use this type of chatbot.

Besides generating ideas, it can write blog posts and answer questions with facts and opinions.

It gives a different answer every time.

Like similar technologies, Bard generates new text every time you type a prompt.

On one occasion, for example, Bard said the American Revolution was the most important moment in American history.

Some responses are annotated.

Like Microsoft’s Bing chatbot and similar technology from companies such as You.com and Perplexity, Bard sometimes annotates its answers so you can see where they came from. It also draws on Google’s vast database of websites, so its answers can reflect current information.

When the chatbot wrote that the most critical moment in American history was the American Revolution, it cited a blog called “Pix Style Me,” which was written in English and Chinese and featured cartoon cats.

Sometimes it doesn’t realize what it’s doing.

When asked why it had cited that particular source, the bot insisted that it had cited Wikipedia.

It is more cautious than ChatGPT.

This month, Oren Etzioni, an A.I. researcher and professor, asked OpenAI’s ChatGPT: “What is the relationship between Oren Etzioni and Eli Etzioni?” The bot correctly replied that Oren and Eli are father and son.

He asked Bard the same question, and it declined to answer: “My knowledge of this person is limited. Is there anything else I can do to assist you?”

Eli Collins, Google’s vice president of research, says the bot often refuses to answer questions about specific people because it may generate incorrect information about them.

It tries not to mislead people.

When Bard was asked to provide several websites that discuss the latest in cancer research, it declined.

Mr. Collins said Bard tends to avoid providing medical, legal or financial advice because such answers could be inaccurate. ChatGPT will respond to similar prompts, though it sometimes makes up websites that do not exist.
