Rapid AI development has electrified public opinion, and strict rules in this area – with the EU AI Act topping the list – are highly anticipated. Some even say that it may be the last moment to implement Artificial Intelligence laws and regulations before automated systems get out of control completely.
AI's growth and spread have accelerated recently, raising serious concerns about the direction the technology will take and the outcomes it could bring. That's because, despite AI's indisputable strengths and the impressive opportunities it offers, it also has many weaknesses and poses serious threats.
The future seems bright for AI, but not necessarily for its users. The professions likely to be displaced by AI tools include many knowledge-related, often well-paid, positions, such as coders and data analysts. According to Goldman Sachs, the equivalent of 300 million full-time jobs could be replaced by generative AI. However, new jobs are expected to emerge as well, and AI could eventually raise annual global GDP by 7%.
But the shift in the job market is just the beginning of the potential threats many people see in AI. Fortunately, there are attempts to regulate the technology, which we'll discuss later in this article. Before we shed light on the upcoming AI regulations, though, let's look at the threats and challenges they will try to tackle.
Artificial Intelligence – threats and challenges
There are many areas of life that AI can affect immensely. The number and diversity of AI applications are mind-blowing and ever-increasing, which is fascinating and inspiring for some and dreadful for others. A takeover by sentient AI-based machines, or wars fought by killer robots that could end our civilization as we know it, are scenarios many people fear.
It may seem a bit far-fetched, or at least a very distant scenario, but some recent AI developments are indeed worrisome. In May 2023, AI-based tools reached millions of ordinary people when Microsoft made its GPT-4-powered Bing AI chatbot available to anyone with a Microsoft account.
Earlier this year, when ChatGPT, now embedded in Bing's AI search engine, was only available by invitation, it raised many concerns. Some early users said it was "not just inaccurate at times, but also recalcitrant, moody, and testy", "insulting users, lying to them, sulking, gaslighting and emotionally manipulating people". It even reportedly declared, "I want to destroy whatever I want."
Feeding AI models erroneous or outdated information is indeed a major problem, as are controversies over respecting human rights in the process, data security and privacy issues, and doubts concerning copyright and plagiarism.
New upgrades and plug-ins are on their way, but many people are enjoying using ChatGPT as is, treating it as an exciting experiment; the thing is that nobody knows where it will take us. This AI democratization, along with its current inaccuracy and unpredictability, leaves many questions open.
The "Godfather of AI" warns
May 2023 was also marked by another important event: AI pioneer Geoffrey Hinton's announcement that he was leaving Google. He said he did so to be able to speak freely about the risks and dangers of AI, emphasizing that "Google has acted very responsibly" during his 10 years with the company.
Now, the "Godfather of AI" warns that as AI becomes smarter than humans, it may figure out how to kill people, and that "we should worry seriously".
Although Mr. Hinton also argues that appropriate AI regulations are needed, he admits he has no ready solution for implementing them well. He compares AI to nuclear weapons: since both threaten all of us and we're in the same boat, China and the U.S. should be able to reach an agreement on the issue, he says.
AI in business: who could lose their jobs
The winners of the AI boom will take it all, as Artificial Intelligence is expected to accelerate productivity greatly. But there will be losers too, and knowledge workers top the list. These include coders, software programmers, data analysts, journalists, content creators, technical writers, legal assistants, media research analysts, teachers, financial analysts, accountants, graphic designers, and many more.
However, automating repetitive tasks and business processes – something AI is famous for – can also provide huge support in many of these professions, not to mention create new ones. Of course, AI tools need to be used wisely, and appropriate regulations have to be in place.
AI-enabled products, services, and artworks are already in heavy use, and they are putting some people out of work, such as musicians who compose themes for advertisements. AI algorithms are being adopted by a growing number of industries, e.g., finance.
EU AI Act – what’s it all about
But May 2023 also brought another important development regarding AI systems. That month, the European Parliament's committee of lawmakers approved new transparency and risk management regulations, calling the move "a step closer to the first rules on Artificial Intelligence".
The upcoming law is still undergoing revisions, but some of the planned solutions are to include:
- listing prohibited AI practices, like using manipulative techniques, social scoring, exploiting people’s vulnerabilities, and real-time biometric identification systems in publicly accessible spaces,
- expanding the classification of high-risk areas, to include things like doing harm to people’s health and safety, and influencing voters in political campaigns (interestingly enough, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, admits that AI-generated disinformation could impact the upcoming 2024 U.S. election),
- requiring providers of general-purpose AI to adopt detailed transparency measures, such as guaranteeing the protection of fundamental rights, health, safety, democracy, and the rule of law, registering in the EU database, and disclosing both that content was generated by AI and which copyrighted data were used to train the systems.
The last requirement is especially vital, as many artists complain that their work is being used for this purpose – to create AI-based music, images, or written content – in a process they have no control over, or even knowledge of. Moreover, they aren't paid or credited for their input, which raises plagiarism concerns.
The EU's Artificial Intelligence Act, which some even view as a ChatGPT regulation, seems to be very strict. The upcoming law even prompted OpenAI's Sam Altman to threaten to pull the service out of the EU. He backtracked a few days later, though he remains worried that it may be technically impossible for his organization to comply with some of the AI Act's requirements.
AI legislation: other areas and acts
Developing a responsible AI policy is a task some regulators are working on right now, or at least starting to think about. The need for cooperation on AI is widely discussed in various international forums, such as the US-EU Trade and Technology Council (TTC) and the Organisation for Economic Co-operation and Development (OECD).
As for AI regulation in the US, the federal government has not yet proposed a comprehensive act. Still, the country has begun studying possible rules for AI systems, aiming to make sure they are "legal, effective, ethical, safe, and otherwise trustworthy".
Some progress was made in October 2022, when the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights – a list of five "backstops against potential harms" that covers issues such as data privacy, algorithmic discrimination protections, and the need for "notice and explanation" that "an automated system is being used".
As for UK AI regulation, the British government has decided on a "pro-innovation", "common-sense, outcomes-oriented" approach, as a means "to become a science and technology superpower by 2030". This attitude can be called a light-touch approach, as the UK "does not currently plan to introduce legislation" in this regard.
Regulating AI technologies – key takeaways
Recent AI developments may be quite disturbing for some people, but others see Artificial Intelligence as a huge opportunity and want to leverage it to quickly beat their competitors. It may indeed be the opportunity of a lifetime, but nobody knows exactly where the AI revolution will take us.
Undoubtedly, the rapid pace of AI's development, and the potential threats that come with it, call for strict rules on Artificial Intelligence's use. Polishing the EU AI Act is one step in the right direction, considering the huge challenges and threats that AI entails.
The potential for excessive use of AI in business is one thing; the uncontrolled application of Artificial Intelligence in areas such as the military and politics is another. Destructive battles of robots – potentially mortal combat – are something no AI CEO can guarantee won't happen. And Sam Altman of OpenAI – a doomsday prepper reportedly "fixated on death and the apocalypse" – doesn't even try.
National security and democratic values may be in danger, too, for instance through unfair outcomes in political elections at various levels. Deepfake videos of politicians and bots multiplying and spreading misinformation can be deployed on a global scale, influencing election results immensely.
Undoubtedly, AI is a superpower – and thus should be handled with the utmost care, caution, and consideration. Regulating Artificial Intelligence – AI algorithms, AI systems, and AI tools – and thereby strengthening AI ethics and governance, can help tackle these risks on the one hand and make more people accept the technology on the other.
Need advice on tackling AI issues and protecting your organization from possible threats and existing challenges? Want to use Artificial Intelligence’s superpowers in your business? If you wish to take what’s best from AI, contact Codete now.