In December 2022, Twitter CEO Elon Musk dissolved the company’s Trust and Safety Council, a volunteer group that advised the platform on online safety, despite increasing reports of hate speech on the platform.
The decision sparked a heated debate in Silicon Valley over the purpose of “trust and safety” teams. Many perceive these organizations as standing in the way of progress, slowing product innovation and introducing onerous rules and hurdles—especially in a world that lives by Mark Zuckerberg’s infamous motto, “Move fast and break things.”
Others, however, have devoted their careers to proving the opposite—that trust and safety are “features” rather than “bugs.” One of them is Daniela Amodei, co-founder and president of Anthropic, which is widely seen as a rival to OpenAI.
“It’s a matter of organizational structure, but it’s also a matter of mindset. If all the other teams see trust and safety as an equal partner, I don’t think there will necessarily be conflict,” Amodei said.
Anthropic co-founder and president, Daniela Amodei.
Of course, Amodei’s Anthropic and Musk’s Twitter are two very different things. But whether in AI or social media, both must grapple with a similar question: who polices the technology, and who decides which values are “right” or “wrong”?
In recent months, AI has become as hot a topic as social media. As the technology advances at an accelerating pace and related startups multiply, so does the interest of investors, founders, and the general public alike.
AI research company Anthropic has benefited from this hype. The company has already raised more than $1 billion (approximately 135 billion yen, at 135 yen to the dollar), and according to PitchBook, its most recent round—its second in 2023—raised $300 million from Spark Capital at a $4.1 billion valuation, The Information reports.
Now, Amodei and her Anthropic colleagues are working to ensure that trust and safety are at the heart of this new AI era, not an afterthought.
off the beaten path
Amodei’s journey into tech has been truly unconventional.
Amodei originally started her career in global health and politics. She helped run a winning campaign for a House seat in northeastern Pennsylvania, but after spending a few months on Capitol Hill managing the representative’s schedule and communications, she decided the work wasn’t the right fit and moved on.
In 2013, she turned to tech, joining payments startup Stripe, which had just 40 employees at the time. In 2018, she moved to OpenAI. At both companies she held roles centered on people, risk, and safety—themes that became the throughline of her career.
In 2020, Amodei left OpenAI along with six colleagues, including her brother Dario Amodei, to launch rival Anthropic.
This decision was controversial. A former OpenAI employee told The Wall Street Journal that Dario Amodei, then OpenAI’s lead safety researcher, was concerned the company would be pressured to release products before safety testing could be completed, and that its relationship with Microsoft was growing too close.
Speaking to Insider, Daniela Amodei said that OpenAI’s products were still in the early stages of development while she was there, so she could not comment on them fully. “We had a vision of a very small, cohesive team doing research together,” she said.
Anthropic already seems to be moving toward realizing that vision. In March, it released Claude, a highly steerable conversational AI that competes with OpenAI’s ChatGPT. Notion and Quora have already adopted Claude.
safety first
While many AI companies now claim to focus on safety, Amodei says Anthropic’s commitment to safety goes beyond lip service.
Safety is a value that should be built into every step of the research process, not just the final stage, says Amodei.
Anthropic grounds its research in the “triple H” framework: helpful, honest, and harmless. Specifically, it draws on a variety of people and perspectives when collecting feedback on model outputs for reinforcement learning, and when building “constitutional AI”—models trained on a human-written set of rules that encourage the AI to be transparent and harmless.
The AI can then supervise itself according to these rules, determining whether a model’s output meets the “triple H” framework without much human involvement, Amodei explains.
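The self-supervision idea can be illustrated with a toy sketch: a draft output is checked against each rule of a miniature “constitution,” and any violation triggers an automatic revision. Everything here—the rule set, the keyword-based checkers, and the fixers—is an invented illustration, not Anthropic’s actual constitution or implementation; a real constitutional-AI system uses a language model for both the critique and the revision steps.

```python
# Toy sketch of a constitutional-AI critique-and-revise loop.
# All rules and helper names below are illustrative assumptions only.

# Each principle pairs a rule name with a (toy) checker and a (toy) fixer.
CONSTITUTION = [
    ("avoid insults",
     lambda text: "idiot" not in text,
     lambda text: text.replace("idiot", "person")),
    ("be transparent about being an AI",
     lambda text: "AI" in text,
     lambda text: text + " (Note: I am an AI.)"),
]

def self_critique(draft: str) -> str:
    """Revise a draft until every constitutional principle is satisfied.

    The "model" supervises itself: it detects each violation and rewrites
    its own output, with no human in the loop.
    """
    for rule, check, fix in CONSTITUTION:
        if not check(draft):
            draft = fix(draft)
    return draft

if __name__ == "__main__":
    print(self_critique("You idiot, here is the answer."))
    # An already-compliant draft passes through unchanged.
    print(self_critique("Here is the answer, from an AI."))
```

In the real technique, the critique step asks the model itself to explain how a response violates a principle, and the revision step asks it to rewrite accordingly—the keyword matching above only stands in for that judgment.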
In addition, Anthropic publishes its safety research in the hope that it will be used by other institutions, from academic laboratories to government agencies.
Amodei believes that trust and safety have become product requirements for customers, though she acknowledges that for profit-seeking companies there is a thorny trade-off between speed and safety. But by prioritizing these values from the start, companies can avoid being derailed by unpredictable crises and stay agile, she says.
look to the future
Founded with the vision of a “small, cohesive team” in mind, Anthropic already has more than 100 employees, according to LinkedIn, and has grown by more than 50% in the last six months.
Anthropic has maintained a multidisciplinary culture even as it grows; Amodei says its employees’ backgrounds range from physics to computational biology to policymaking. The same is true of its founders: co-founder Jack Clark was a tech journalist at Bloomberg before moving into the AI industry.
“We are a little different than traditional tech companies,” says Amodei.
For Amodei, the challenge of building AI safely demands the near-impossible task of predicting the future.
“We all need to look at the problems of the present and, at the same time, think seriously about how to achieve controllable progress. And we also need to look ahead to the problems the future will bring.”
But even in a situation full of uncertainty, there is still much to be excited about.
“Even though I’m immersed in this field, I still feel like we’re on the cusp of something big that could change the way we communicate and engage with people.”
(Edited by Ayuko Tokiwa)
Source: BusinessInsider
Emma Warren is an author and market analyst who writes for 24 news breaker.