The field of AI has seen remarkable growth in recent years. According to Statista, the global AI market is projected to generate roughly $3 trillion in revenue by 2024, up from $126 billion in 2015. Despite these advances, technology experts are warning about the potential dangers of AI.
The recent rise of generative AI models like ChatGPT has enabled new functions in sensitive industries such as healthcare, education, and finance. However, these developments are vulnerable to exploitation by malicious agents due to various shortcomings of AI. In this discussion, we will explore the opinions of AI experts on these developments, address the potential risks, and touch on ways to manage them.
AI Risks: Tech Leaders' Concerns
Geoffrey Hinton
The renowned AI expert Geoffrey Hinton, often called the "godfather" of the field, has expressed worries about the risks associated with the fast-paced growth of AI technology. In particular, Hinton has warned that chatbots could become "quite scary" if they surpass human intelligence. Recently, Hinton resigned from Google so that he could speak more freely about these risks.
Hinton has noted that GPT-4 already far surpasses any individual person in the breadth of its general knowledge. While its reasoning is not yet as strong, it can already handle simple reasoning tasks, and given the pace of progress he expects those abilities to improve rapidly, which he believes is something to be mindful of.
He also believes that AI can be misused by "bad actors", for example by giving robots the ability to set their own sub-goals. While acknowledging the potential short-term advantages of AI, Hinton emphasizes the need for significant investment in AI safety and control.
Elon Musk
Elon Musk became involved in AI when he invested in DeepMind in 2010. He has since co-founded OpenAI and incorporated AI into Tesla's autonomous vehicles. Although he is enthusiastic about AI, he has concerns about its potential risks. In April 2023, during an interview on Fox News, he stated that powerful AI systems could pose a greater threat to society than nuclear weapons.
According to Musk, AI poses a greater danger than, say, mismanaged aircraft design or faulty car production. Although the probability of such a catastrophe is small, it is non-trivial and could potentially lead to the destruction of civilization. Musk supports government regulation to minimize these risks, even though he does not find the process of being regulated enjoyable.
Thousands of AI experts have signed an open letter calling for a pause on giant AI experiments.
The Future of Life Institute published an open letter on March 22, 2023, calling for a six-month pause on the training of AI systems more powerful than GPT-4. The authors argue that the rapid pace of AI development is creating significant social and economic challenges that need to be addressed.
The letter stresses the importance of AI developers working together with policymakers to establish well-defined AI governance systems. Notably, as of June 2023, more than 31,000 AI developers, experts, and technology leaders had signed the letter, including Elon Musk, Steve Wozniak (co-founder of Apple), Emad Mostaque (CEO of Stability AI), and Yoshua Bengio (Turing Award laureate).
Arguments against halting AI development
Andrew Ng and Yann LeCun have both criticized the proposed six-month pause on advanced AI development, calling it a poor decision. While acknowledging the potential risks of AI, including bias and concentration of power, Ng highlights the great value it brings to fields such as education, healthcare, and coaching.
According to Yann LeCun, it is important to continue research and development in AI, while regulating the AI products that are released to end-users.
What are the immediate risks and potential dangers of AI?
1. Job Displacement
According to AI experts, intelligent AI systems are capable of taking over cognitive and creative tasks; Goldman Sachs estimates that generative AI could expose the equivalent of roughly 300 million full-time jobs to automation. Regulations should therefore be put in place to avoid a severe economic downturn, and employee education programs should focus on upskilling and reskilling to help workers adapt.
2. Biased AI Systems
Human biases regarding gender, race, or color can unconsciously affect the data that is used to train AI systems, leading to biased outcomes. This bias can result in discrimination in various fields, such as job recruitment or law enforcement. For example, a biased AI system in job recruitment may reject resumes of individuals from certain ethnic groups, creating unfairness in the job market. Similarly, biased predictive policing can unfairly target particular neighborhoods or demographic groups.
It is important to develop a thorough data strategy that specifically considers the risks associated with AI, particularly the issue of bias. Regular evaluations and audits should be conducted on AI systems to ensure their fairness.
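As a concrete illustration, here is a minimal Python sketch of one simple fairness audit, a demographic-parity check on a model's decisions. The data, column names, and tolerance are hypothetical; real audits typically use dedicated tooling and several complementary fairness metrics.

```python
# Minimal demographic-parity audit: compare the rate of positive model
# decisions across groups. Data, column names, and threshold are hypothetical.
import pandas as pd

# Hypothetical audit log of model decisions (1 = recommended for interview).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   0,   1,   0,   1,   0,   0 ],
})

# Selection rate per group.
rates = decisions.groupby("group")["selected"].mean()
print(rates)

# Demographic-parity gap: difference between the highest and lowest rate.
gap = rates.max() - rates.min()
print(f"Demographic-parity gap: {gap:.2f}")

# Flag the model for review if the gap exceeds a chosen tolerance (e.g. 0.2).
if gap > 0.2:
    print("Warning: selection rates differ substantially across groups.")
```

A large gap does not by itself prove discrimination, but it is a cheap, repeatable signal that should trigger a deeper review of the training data and model.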
3. Safety-Critical AI Applications
Examples of safety-critical AI applications include autonomous vehicles, medical diagnosis and treatment, aviation systems, and nuclear power plant control. Because the consequences for human life and the environment can be severe, developers must exercise particular caution when building these systems: even minor errors could prove disastrous.
The crashes of two Boeing 737 MAX aircraft in October 2018 and March 2019, which resulted in the deaths of 346 people, were attributed in part to malfunctioning MCAS flight-control software.
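Defensive design is one common mitigation in safety-critical systems. Below is a minimal, purely illustrative Python sketch (not taken from any real avionics codebase; all names and thresholds are hypothetical) of one such safeguard: cross-checking redundant sensors and handing control back to the human operator when they disagree, rather than acting on a single, possibly faulty, reading.

```python
# Illustrative sketch only: cross-check redundant sensor readings before an
# automated controller is allowed to act. Names and thresholds are hypothetical.

DISAGREEMENT_LIMIT_DEG = 5.0   # max allowed spread between redundant sensors
CRITICAL_ANGLE_DEG = 15.0      # angle beyond which automatic correction engages


def decide_action(sensor_a_deg: float, sensor_b_deg: float) -> str:
    """Return the action the automated controller should take."""
    # If the redundant sensors disagree, trust neither of them:
    # disengage automation and alert the human operator instead.
    if abs(sensor_a_deg - sensor_b_deg) > DISAGREEMENT_LIMIT_DEG:
        return "disengage_and_alert_crew"

    # Sensors agree: act only when the averaged reading crosses the threshold.
    average = (sensor_a_deg + sensor_b_deg) / 2
    if average > CRITICAL_ANGLE_DEG:
        return "apply_automatic_correction"
    return "no_action"


if __name__ == "__main__":
    print(decide_action(16.2, 15.8))  # sensors agree -> apply_automatic_correction
    print(decide_action(40.0, 4.0))   # sensors disagree -> disengage_and_alert_crew
```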
How should we address the challenges that AI systems may bring?
Responsible AI (RAI) refers to a set of principles and practices that provide guidance for the development and deployment of Artificial Intelligence (AI) systems. It aims to ensure that AI systems are fair, accountable, transparent, secure, and adhere to legal regulations and social norms. With the rapid development of AI systems, implementing RAI can be a complex process.
Big tech companies have developed RAI frameworks to help organizations with designing ethical AI systems. These frameworks include Microsoft's Responsible AI Standard, Google's AI Principles, and IBM's Trusted AI framework.
Organizations can create responsible AI solutions that benefit society as a whole by using these frameworks as a guide.
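In practice, much of this guidance comes down to documenting and reviewing systems before deployment. The sketch below shows one hypothetical way to record the kind of metadata (intended use, training data, known limitations, fairness checks) that such frameworks typically ask for; it is not taken from any specific framework, and the field names are illustrative.

```python
# Hypothetical "model card" record capturing the documentation that
# responsible-AI frameworks commonly require before deployment.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    fairness_checks: list[str] = field(default_factory=list)
    approved_for_deployment: bool = False


card = ModelCard(
    name="resume-screening-v2",
    intended_use="Rank applications for human review; never auto-reject.",
    training_data="Anonymized applications from 2019-2022, collected with consent.",
    known_limitations=["Lower accuracy on non-English resumes."],
    fairness_checks=["Demographic-parity gap below 0.05 on the holdout set."],
)

# A deployment gate might refuse to ship any model without a completed,
# signed-off card.
if not card.approved_for_deployment:
    print(f"{card.name} still requires sign-off before deployment.")
```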
How can organizations ensure regulatory compliance when using AI?
AI-based organizations and labs must adhere to a set of regulations to ensure data security, privacy, and safety. These regulations include the General Data Protection Regulation (GDPR), a framework by the European Union (EU); the California Consumer Privacy Act (CCPA), a state statute for privacy rights and consumer protection; and the Health Insurance Portability and Accountability Act (HIPAA), a U.S. statute.
The European Commission has also established AI-related regulations and guidance, including rules protecting patients' medical data, the EU AI Act, and the Ethics Guidelines for Trustworthy AI. Failure to comply with these rules can have significant consequences.
A serious GDPR infringement, such as unlawfully processing data, failing to obtain valid consent, violating data subjects' rights, or transferring unprotected data to a third country or international organization, can result in a fine of up to €20 million or 4% of annual global turnover, whichever is higher.
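To make this concrete, here is a small, hypothetical Python sketch of two habits these rules encourage: processing only records with documented consent, and pseudonymizing direct identifiers before data reaches a training pipeline. It illustrates the principles under assumed data and field names; it is not a compliance checklist.

```python
# Hypothetical pre-processing step illustrating consent filtering and
# pseudonymization before personal data is used to train a model.
import hashlib

records = [
    {"user_id": "alice@example.com", "consent": True,  "age": 34},
    {"user_id": "bob@example.com",   "consent": False, "age": 41},
]


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a one-way hash."""
    return hashlib.sha256(identifier.encode("utf-8")).hexdigest()[:16]


# Keep only records with documented consent, and drop direct identifiers.
training_rows = [
    {"uid": pseudonymize(r["user_id"]), "age": r["age"]}
    for r in records
    if r["consent"]
]

print(training_rows)  # only the consented, pseudonymized record remains
```

Note that pseudonymization alone does not make data anonymous under the GDPR; it is one safeguard among several that a data-protection strategy would combine.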
Different countries have created regional and local laws to safeguard their citizens.
Present and Future of AI Development and Regulations
AI technology is rapidly advancing, but the regulations and governance frameworks surrounding it are not keeping up.
Tech leaders and AI developers are warning of the risks of unregulated AI. While AI can bring value to many sectors through research and development, it needs to be carefully regulated.