### OpenAI’s Commitment to AI Safety
#### Showcasing New Research
OpenAI is making strides to demonstrate its commitment to AI safety. Recently, the company highlighted research aimed at helping researchers better scrutinize AI models, even as these models become more advanced and useful.
#### Founding Principles and Current Criticism
OpenAI was established with the goal of making AI more transparent and safer. However, following the success of ChatGPT and increased competition from well-funded rivals, some critics argue that the company is focusing more on flashy advancements and market share than on safety.
#### Expert Opinions on AI Safety
“The situation we are in remains unchanged. Opaque, unaccountable, unregulated corporations racing each other to build artificial superintelligence, with basically no plan for how to control it.”
Daniel Kokotajlo, a former OpenAI researcher, emphasizes that while the new research is important, it is only incremental. He stresses the need for more oversight of companies developing AI technology.
#### Need for External Oversight
An anonymous source familiar with OpenAI’s internal operations also highlights the necessity for external oversight of AI companies. They question whether OpenAI is genuinely committed to the processes and governance needed to prioritize societal benefit over profit, rather than just allowing some researchers to focus on safety.
### Conclusion
OpenAI’s recent efforts to showcase AI safety research are a step in the right direction. However, experts and insiders alike call for more comprehensive oversight and governance to ensure that the development of AI technology benefits society as a whole.