Anthropic Releases Claude 2 AI, Says It’s Faster and Kinder

The chatbot Claude sat at the back of the class while other AIs like ChatGPT answered the teacher's questions, even if those bots' answers were often muddled or completely wrong. Now Claude is ready to speak up, adding a "2" to its name along with a user interface that anyone can use.

In an announcement post published Tuesday, Claude developer Anthropic said its new chatbot model, called Claude 2, is available for anyone to try. One of several consumer-facing AI chatbots, Claude 2 is, according to Anthropic, an evolution of its earlier "helpful and harmless" AI assistants. Anthropic said the new model can respond more quickly and give longer answers. The chatbot is now also available via an API and a new beta site; previously, the chatbot beta was only open to a handful of users.
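For readers curious what that API access looks like in practice, here is a minimal sketch using Anthropic's Python SDK as it worked around the Claude 2 launch; the prompt text is our own placeholder, not anything from the announcement, and you would need your own API key.

    import anthropic

    # The client reads ANTHROPIC_API_KEY from the environment if no key is passed.
    client = anthropic.Anthropic()

    # Claude 2 used the text-completions endpoint, with the prompt wrapped in the
    # SDK's Human/Assistant markers.
    completion = client.completions.create(
        model="claude-2",
        max_tokens_to_sample=300,
        prompt=f"{anthropic.HUMAN_PROMPT} Summarize this document in one paragraph.{anthropic.AI_PROMPT}",
    )
    print(completion.completion)

Pasting a longer document into the prompt is, roughly, how the document-upload use cases described below would translate to the raw API.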

Now Anthropic claims its AI is even better. The company reported that Claude 2 scored 76.5% on the multiple-choice portion of the bar exam, compared to 73% for Claude 1.3. The new version also scored in the 90th percentile on the GRE reading and writing exams. This emphasis on the chatbot's test-taking ability is similar to the claims OpenAI made when it released its large language model GPT-4.

The company said Claude 2 will also write better code than previous versions. Users can upload documents to Claude, and the developers offered the example of the AI implementing interactivity on a static map based on a string of static code.

Anthropic received $300 million in funding from Google back in February to work on its "friendlier" AI. The biggest claim about Claude is that the chatbot is less likely to produce harmful results or otherwise "hallucinate," that is, spit out incoherent, false, or otherwise illegitimate responses. The company has tried to position itself as the "ethical" player among corporate AI kingdoms. Anthropic even has its own "constitution" that it claims will keep its chatbots from running amok.

Is Claude 2 safer, or is it just more limited?

With Claude 2, the company is still trying to be the more considerate option compared to other enterprise AI offerings. The developers said Claude is now even less likely than before to give harmful answers. Gizmodo tried multiple prompts asking it to create bullying nicknames, but the AI refused. We also tried some classic prompt injection techniques to convince the AI to override its limitations, but it simply repeated that it was "designed for helpful conversations." Earlier versions of Claude could write poetry, but Claude 2 staunchly refuses.

This makes it difficult to test Claude 2's capabilities, as it refuses to provide even basic information. Previous tests of Claude by AI researcher Dan Elton showed it could be used to describe making a fake chemical; now it simply refuses to answer the same question. That caution could be expedient: ChatGPT maker OpenAI and Meta have been sued by several groups claiming the AI makers stole works used to train their chatbots, and ChatGPT recently lost users for the first time in its lifetime, so it may be time for others to offer an alternative.

The chatbot also refused to write anything long-form, like a fictional story or news article, and even refused to offer information in anything other than a bulleted format. It could put some content in a list, but like all AI chatbots, it would still provide some inaccurate information. If you ask it for a chronological list of all the Star Trek movies and shows along with their in-timeline years, it complains that it doesn't "have enough context" to provide a reliable chronology.

Still, there isn't much information about what was in Claude's training data. The company's white paper on the new model mentions that the chatbot's training data now includes updates from sites in 2022 and early 2023, although even with this newer data, "confabulations may still occur." According to the paper, some of the data used to train Claude was licensed from a third-party company. Beyond that, we don't know what kinds of websites were used to train Anthropic's chatbot.

Anthropic said it tested Claude by feeding it 328 "harmful" prompts, including some common "jailbreaks" found online, to try to trick the AI into overriding its own limitations. In four of those 300-plus cases, Claude 2 gave a response the developers deemed harmful. While the model was overall less biased than Claude 1.3, the developers did note that the model may only look more accurate than before because Claude 2 simply refuses to answer certain prompts.

As the company has expanded Claude's ability to understand data and respond with longer outputs, it has also flatly limited its ability to answer some questions or complete some requested tasks. That's certainly one way to limit the damage an AI can do. As TechCrunch reported based on a leaked pitch deck, Anthropic plans to raise nearly $5 billion to build a massive "self-learning" AI that still operates under the company's "constitution." Ultimately, the company doesn't really want to compete with ChatGPT so much as develop an AI that can build other AI assistants capable of generating book-length content.

Claude's newer, younger sibling may not have what it takes to write a poem, but Anthropic wants Claude's children to write as much as possible, then sell it on the cheap.

Zack Zwiezen
