### The Birth of Self-Attention in AI
In 2016, Google engineer Illia Polosukhin had a pivotal lunch with his colleague Jacob Uszkoreit. Frustrated by slow progress on an AI project aimed at providing useful answers to user questions, Polosukhin listened as Uszkoreit described a new technique called self-attention. The idea sparked an eight-person collaboration that culminated in the 2017 paper “Attention Is All You Need,” which revolutionized artificial intelligence.
### Concerns Over Transparency
Eight years later, Polosukhin is uneasy about the state of AI. A staunch advocate of open source, he is troubled by the secrecy surrounding transformer-based large language models, even at companies that claim to value transparency. We often don’t know what data these models are trained on, he points out, and their weights may be withheld, making it impossible for outsiders to experiment with them. While Meta claims its systems are open source, Polosukhin disagrees:
> The parameters are open, but we don’t know what data went into the model, and data defines what bias might be there and what kinds of decisions are made.
### The Dangers of Profit-Driven AI
As LLM technology advances, Polosukhin fears it will become more dangerous, with the pursuit of profit driving its development. He warns that companies will keep justifying ever-larger funding rounds to train better models, models that could then be used to manipulate people and generate revenue more effectively.
### Skepticism About Regulation
Polosukhin has little faith in regulation as a solution. He believes that setting limits on these models is so complex that regulators will have to depend on the companies themselves. He doubts that many people, even engineers, can effectively assess model parameters and safety margins, let alone policymakers in Washington, DC.
### Risk of Regulatory Capture
This complexity makes the industry susceptible to regulatory capture. Polosukhin notes that larger companies know how to manipulate the system by placing their own people on regulatory committees, ensuring that the “watchers are the watchees.”
### The Case for Open Source AI
Polosukhin advocates for an open source model in which accountability is built into the technology itself. Before the 2017 transformers paper was even published, he left Google to start the Near Foundation, a blockchain/Web3 nonprofit. The foundation is now semi-pivoting to apply those principles of openness and accountability to what he calls “user-owned AI,” an approach that would use blockchain-based crypto protocols to create a decentralized, neutral platform.
> Everybody would own the system. At some point you would say, ‘We don’t have to grow anymore.’ It’s like with bitcoin—the price can go up or down, but there’s no one deciding, ‘Hey, we need to post $2 billion more revenue this year.’ You can use that mechanism to align incentives and build a neutral platform.
### Near Foundation’s Initiatives
Developers are already building applications on Near’s platform using this open source model, and Near has launched an [incubation program](https://near.org/blog/near-foundation-launches-ai-incubation-program) to support startups in the effort. One promising application is a system for distributing micropayments to creators whose content is used to train AI models.