The summit, scheduled to be held at Bletchley Park in the United Kingdom, will be attended by the UK Prime Minister, Rishi Sunak, Sky News reports.
The potential threat AI poses to human life itself should be a focus of any government regulation, Members of Parliament warned.
Bletchley Park, a private country house taken over by the British Secret Intelligence Service in 1938, is where codebreakers including Alan Turing decrypted Nazi messages during the Second World War. According to Sky News, the site was also crucial to the development of computing itself: the Colossus machines built there were used to break encrypted German communications.
Greg Clark, MP and chairman of the Science, Innovation and Technology Committee, said he “strongly welcomes” the summit.
Clark said, “The technology is going to be global, and there is some thinking to be done about AI safety across all countries and we should try to explore whether it is possible to have an agreement on this. If this is to be the first global AI summit, to have as many voices as possible would be beneficial.”
The 12 challenges the committee said “must be addressed” are:
1. Existential threat – If, as some experts have warned, AI poses a major threat to human life, then regulation must provide national security protections.
2. Bias – AI can introduce new or perpetuate existing biases in society.
3. Privacy – Sensitive information about individuals or businesses could be used to train AI models.
4. Misrepresentation – Language models like ChatGPT may produce material that misrepresents someone’s behaviour, personal views and character.
5. Data – Training the most powerful AI models requires vast amounts of data.
6. Computing power – Similarly, the development of the most powerful AI requires enormous computing power.
7. Transparency – AI models often struggle to explain why they produce a particular result, or where the information comes from.
8. Copyright – Generative models, whether they produce text, images, audio, or video, typically make use of existing content, which must be protected so as not to undermine the creative industries.
9. Liability – If AI tools are used to do harm, policy must establish whether the developers or providers are liable.
10. Employment – Politicians must anticipate the likely impact that embracing AI will have on existing jobs.
11. Openness – The computer code behind AI models could be made openly available to allow for more dependable regulation and promote transparency and innovation.
12. International coordination – The development of any regulation must be an international undertaking, and the November summit must welcome “as wide a range of countries as possible.”