### OpenAI’s Commitment to AI Safety
#### Showcasing New Research
OpenAI is making strides to demonstrate its commitment to AI safety. Recently, the company highlighted research aimed at helping researchers better scrutinize AI models, even as these models become more advanced and useful.
#### Founding Principles and Current Criticism
OpenAI was established with the goal of making AI more transparent and safer. However, following the success of ChatGPT and increased competition from well-funded rivals, some critics argue that the company now prioritizes flashy advancements and market share over safety.
#### Expert Opinions on AI Safety
Daniel Kokotajlo, a former OpenAI researcher, acknowledges that the new research is important but calls it only incremental, stressing the need for more oversight of companies developing AI technology. “The situation we are in remains unchanged,” he says. “Opaque, unaccountable, unregulated corporations racing each other to build artificial superintelligence, with basically no plan for how to control it.”
#### Need for External Oversight
An anonymous source familiar with OpenAI’s internal operations likewise emphasizes the need for external oversight of AI companies. They question whether OpenAI is genuinely committed to the processes and governance required to prioritize societal benefit over profit, rather than merely allowing some researchers to focus on safety.
### Conclusion
OpenAI’s recent efforts to showcase AI safety research are a step in the right direction. However, experts and insiders alike call for more comprehensive oversight and governance to ensure that the development of AI technology benefits society as a whole.