Amid concerns over the uncontrollable development of artificial intelligence, experts warn that measures to contain the technology may not work.
Anthropic this week revealed the set of values it uses to train its AI, Claude, a rival to the technology behind OpenAI’s ChatGPT. It’s part of a growing effort to ensure the safety of AI users.
“Just like nuclear weapons or cloning, there needs to be a framework and a shared understanding of how this technology can be developed safely for human needs,” Vinod Iyengar, chief product officer at ThirdAI, a company that trains large language models, told Lifewire in an email interview. “This requires trust and collaboration between companies and governments.”
Anthropic takes a systematic approach to ensuring its AI does not harm people. The company calls the system Constitutional AI because its values are defined by a written constitution.