An open letter signed by a multitude of technology experts has called for a six-month pause on the “dangerous race to ever-larger unpredictable black box models with emergent capabilities”.
The document instead encourages a deeper understanding of the “profound risks to society and humanity” posed by advancements in artificial intelligence (AI).
Describing AI as potentially representing a “profound change in the history of life on Earth” that should therefore be managed with immense care and resources, the letter, whose signatories include Elon Musk, chief executive officer of Twitter and Tesla, and Steve Wozniak, co-founder of Apple, calls on AI labs the world over to “use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design.”
It called for the pause to be “public and verifiable, and include all key actors,” adding that if such action cannot be swiftly taken, “governments should step in and institute a moratorium.”
The open letter responds to what its authors describe as recent months of “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
With contemporary AI systems “becoming human-competitive at general tasks,” the letter’s authors urge readers to ask several questions of the technology, including:
- Should we let machines flood our information channels with propaganda and untruth?
- Should we automate away all the jobs, including the fulfilling ones?
- Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us?
Given how new the landscape created by the latest AI developments is, the authors did not call for a halt to the technology altogether, instead declaring that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The document cited a recent statement from OpenAI — the company behind the recently revealed GPT-4 software, capable of accepting text or image inputs — regarding artificial general intelligence, which read:
“At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
The experts said that any protocols developed during the pause must “ensure that systems adhering to them are safe beyond a reasonable doubt”.
They also said that any research and development conducted during this period should be “refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
“In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems,” the letter also read.
Such governance, the letter suggested, should include, amongst other requirements, “new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability.”
It should also provide for “provenance and watermarking systems to help distinguish real from synthetic, and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause.”
The letter concluded that coexistence between humanity and AI is possible, and that the pair can “enjoy a flourishing future”, so long as AI developers do not “rush unprepared into a fall.”