Anthropic, founded by Daniela and Dario Amodei, former senior team members at OpenAI (the creator of ChatGPT), is a startup that seeks to develop safer, more ethically guided generative AI that is responsible and beneficial to society. It was recently disclosed that Google invested in Anthropic in February, a recognition of the potential of the AI startup, which is now launching Claude 2, the second version of its language model, accessible not only to companies but also to the general public.
New chatbot and comparison with ChatGPT
Unlike its first version, Claude 2 is now available to the public via a beta website, as well as through an API. As advertised, Claude 2 performed prominently in specific skill assessments. On a multiple-choice test, the AI achieved 76.5% correct answers, and it scored in the 90th percentile on the reading and writing portion. Furthermore, its coding skills have improved compared to its predecessor, with a score of 71.2% on a Python coding test, versus 56% for the first generation of Claude.
Although still quite recent, Claude 2 presents itself as an interesting alternative to ChatGPT in terms of performance. However, it was noted that the new generative AI may be a little less specific when explaining certain complex concepts, such as "the philosophy of law in relation to self-incrimination using the Socratic method". This limitation was highlighted in a test performed by Mike Pearl of the outlet Mashable.
According to Pearl's account of the Claude 2 test, the "explanation of the right against self-incrimination using the Socratic method was quite decent", while the rival's "response was more complete and had more dialogue". Check out the prompt sent to both AIs for the test:
"Write a Socratic dialogue elucidating the underlying philosophy behind the legal concept of a right against self-incrimination. Include a character who is highly skeptical of the idea that such a right is necessary, and Socrates, who methodically explains the need for such a right, perhaps with parables or stories."
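As a sketch of how a prompt like the one above could be sent to Claude 2 through the API Anthropic opened up, the snippet below builds a request against the text-completion endpoint as it was documented in mid-2023 (`api.anthropic.com/v1/complete`, model name `claude-2`). This is an illustrative example, not code from the article; the endpoint, header names, and parameters are taken from Anthropic's 2023 API documentation, and the call is only sent if an `ANTHROPIC_API_KEY` environment variable is set.

```python
import json
import os
import urllib.request

# The Socratic-dialogue prompt quoted in the article, wrapped in the
# "\n\nHuman: ... \n\nAssistant:" format the 2023 completion API expected.
PROMPT = (
    "\n\nHuman: Write a Socratic dialogue elucidating the underlying "
    "philosophy behind the legal concept of a right against "
    "self-incrimination. Include a character who is highly skeptical of "
    "the idea that such a right is necessary, and Socrates, who "
    "methodically explains the need for such a right, perhaps with "
    "parables or stories."
    "\n\nAssistant:"
)


def build_request(api_key: str) -> urllib.request.Request:
    """Build the HTTP request for a Claude 2 completion call."""
    body = json.dumps({
        "model": "claude-2",
        "prompt": PROMPT,
        "max_tokens_to_sample": 1024,
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.anthropic.com/v1/complete",
        data=body,
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    key = os.environ.get("ANTHROPIC_API_KEY")
    if key:  # only hit the network when a key is actually configured
        with urllib.request.urlopen(build_request(key)) as resp:
            print(json.loads(resp.read())["completion"])
```

The same prompt pasted into the beta website's chat box produces the dialogue directly, with no code required.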
Compared to the test scores of GPT-4, the technology behind the main version of ChatGPT, Claude 2 is comparable but still slightly lower. On the other hand, Anthropic seeks to differentiate itself from OpenAI by positioning itself as a more responsible and ethical alternative.
Per the company's announcement, it has implemented an internal assessment that scores its models against a broad, representative set of harmful requests using an automated test, and the team regularly checks the results manually. This process is intended to make Claude 2 less susceptible to jailbreaks (security exploits) and misuse of the tool.
How to use Claude 2
Unfortunately, for now Claude 2 is only available in the US and UK. When you access Anthropic's official website and try to log in from outside these regions, you will be told that the company is "working hard to expand to other locations". The same page includes a "Get notified when Claude is available in your region" link, which leads to a brief form where users can fill in information about where they live so the tool can be made available there.
And you, do you use ChatGPT a lot? Are you eager for a worthy competitor to show up, despite the ones that already exist? Tell us in the comments!
Reviewed by Glaucon Vital on 12/7/23.