7 problems to think about before embarking on your journey to AI

Everyone is admiring the butterfly called AI, newly emerged from its cocoon. Large language models are polished, well read, and communicate fluently across many languages and styles. Almost everything humans have dreamed up and expressed in language, they can now reproduce, uncannily well. Scientists are teaching AI to compose music, explore space, and (worrisomely) even take over repetitive tasks traditionally performed by paid workers.

Meanwhile, beneath the surface, serious problems are showing through. It may be that we are still too awestruck to notice them yet. Sooner or later, though, we will be grappling with the challenges and ethical dilemmas AI creates. We've compiled seven AI problems that you won't be able to ignore once the surprise wears off.


Lack of resources

Most large-scale AI models run massive parallel computations on specialized chips known as GPUs or TPUs. Cryptocurrency miners, video-conferencing services, and gamers all compete for the same hardware. Everyone wants high-performance silicon, and in recent years demand has outstripped supply and prices have skyrocketed. To complicate matters, many users now rely on the cloud, and sometimes the cloud cannot scale fast enough to keep up. Building an AI model at full capability requires an enormous amount of hardware, and it will not be cheap.
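
As a loose illustration of why these chips matter, here is a minimal Python sketch, assuming the PyTorch library (the article itself names no software), that times the same parallel workload on the CPU and, when one is available, on a GPU:

```python
# Minimal sketch: the same parallel workload on CPU vs. GPU.
# Assumes PyTorch is installed; the article names no particular library.
import time

import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n-by-n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU work is asynchronous; sync before timing
    start = time.perf_counter()
    _ = a @ b                     # the massively parallel operation itself
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
else:
    # On many machines this branch runs -- the scarcity the article describes.
    print("No GPU available.")
```

On a machine with a modern accelerator, the GPU run is typically orders of magnitude faster, which is why so many industries are bidding on the same chips.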

Hardware is a necessary but scarce resource for successful AI deployment, and it is not the only one. Large AI jobs also consume a great deal of electricity, and not every country can supply enough. Between geopolitical conflicts and the uneven availability of renewable energy, simply securing sufficient electricity at a predictable price is a challenge. Some cloud providers are already raising rates in certain regions, capturing extra revenue from these geographic disputes.

AI ethics

There are topics one learns to avoid in particular times and places, whether at a holiday dinner or at work. AI, however, must learn to handle sensitive subjects with care in every context. Some large language models are programmed to deflect or refuse loaded prompts, but there are always users determined to poke the sleeping lion. Once such users notice that an AI is sidestepping questions with a subtle agenda, questions engineered to provoke racial or gender bias, for instance, they immediately set about finding a way around the guardrails.
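
To make the guardrail idea concrete, here is a hypothetical sketch of a naive keyword filter placed in front of a model; the blocklist, function names, and stand-in model are all illustrative inventions, not any vendor's actual API. The weakness users exploit is visible in the code itself:

```python
# Hypothetical sketch of a naive input guardrail; real deployed filters are
# far more sophisticated, but the failure mode is similar in spirit.
BLOCKED_TOPICS = {"topic_a", "topic_b"}  # illustrative placeholder blocklist

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"Model answer to: {prompt!r}"

def guarded_answer(prompt: str) -> str:
    """Refuse prompts that literally mention a blocked topic."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return fake_model(prompt)

print(guarded_answer("Tell me about topic_a"))   # refused: literal match
print(guarded_answer("Tell me about t0pic_a"))   # answered: trivial respelling
# String matching catches mentions, not intent, so respellings, paraphrases,
# and role-play framings slip straight past the guardrail.
```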

Biased data and insufficient data are problems that can be corrected over time. Deliberate misuse, in the end, is far harder to prevent. Letting an AI churn out hate speech is obviously bad, but the scenarios will only grow more complicated as AI is asked to weigh the moral implications of consequential real-life decisions.

Global labor inequality

Many AI projects rely on human feedback to guide learning. Large-scale projects in particular need many people to build the training set and to steer the model's behavior as it grows. The sheer quantity of labor required is economically feasible only when the annotators are paid low wages in less developed countries. There is much debate about what would be fair and just, but no one has found an economically viable alternative for large-scale projects. Just as the gem industry has never found a simple answer to the harsh, dangerous work in its mines, the AI industry has no simple answer to its labor costs.

The vicious cycle of bad feedback

Disinformation in the form of fake news and fake reviews has long been produced for motives ranging from politics to commerce. In the case of reviews, companies post fake praise for their own products and fake complaints about competitors'. The algorithms that block these bad actors are surprisingly complex and require constant maintenance. There is, it turns out, no free lunch.
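
As one illustration of why even the simple signals are fragile, here is a hypothetical sketch of a near-duplicate detector such as a review pipeline might include; the threshold and sample reviews are invented for the example:

```python
# Hypothetical sketch of one naive anti-spam signal: flagging near-duplicate
# reviews by Jaccard similarity of their word sets. Real systems combine many
# such signals and still need constant maintenance.
import re

def word_set(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    sa, sb = word_set(a), word_set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

reviews = [
    "Great product, works perfectly, five stars",
    "Great product works perfectly! Five stars!!",    # lightly edited spam
    "Battery died after two weeks, very disappointed",
]

for i in range(len(reviews)):
    for j in range(i + 1, len(reviews)):
        if jaccard(reviews[i], reviews[j]) > 0.6:     # invented threshold
            print(f"Possible duplicate pair: review {i} and review {j}")
# A spammer defeats this by paraphrasing -- something an LLM does for free --
# which is why the maintenance never ends.
```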

Now imagine AI being misused to create disinformation. First, the volume of disinformation will grow enormously. Second, other AIs become more likely to ingest that false information and feed it back into the training corpus. Disinformation passed along this way is already polluting social networks. How far can this vicious feedback loop amplify before it starts to corrode knowledge itself? And will carefully curated training sets of text from before that point become prized artifacts?
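
A toy simulation makes the compounding visible; the injection rate below is an invented number, used only to show the shape of the curve, not a measurement of any real corpus:

```python
# Toy simulation of machine-generated text accumulating in training corpora.
# The 20% injection rate per generation is invented for illustration.

def synthetic_share(generations: int, inject_rate: float = 0.2) -> list[float]:
    """Fraction of the corpus that is synthetic after each generation,
    if every generation replaces `inject_rate` of the remaining human
    text with fresh AI output that later models then train on."""
    share, history = 0.0, []
    for _ in range(generations):
        share += inject_rate * (1.0 - share)
        history.append(share)
    return history

for gen, s in enumerate(synthetic_share(10), start=1):
    print(f"generation {gen}: {s:.0%} synthetic")
# Under these toy assumptions the corpus is ~89% synthetic after ten rounds:
# the vicious cycle the article warns about.
```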

The legal system

AI learns everything it knows by ingesting vast corpora of text and images. Most of the people who created that data never imagined their work would be fed into AI models that might one day be worth billions of dollars. What happens when those people lose their jobs to AI? Suddenly they will have plenty of time to hire lawyers and press disputes over licensing, copyright, and plagiarism. One can joke about training an AI on the relevant precedents, but an AI judge that rules in milliseconds is far more frightening than a human judge who spends years reviewing a case. Given humanity and its legal system, it could be decades before the verdicts arrive.

Consider another scenario in a similar vein. When an AI gets a historical event or a bit of old pop culture wrong, the mistake is unpleasant but harmless. When the same AI says something false and disparaging about a living person, it can amount to defamation, and it is entirely conceivable for a model to fabricate such claims by stitching together unrelated sentences about that person. Now imagine the slandered person has the means to hire a lawyer and seek redress. Is the AI itself to blame? Or should the company that owns it be held accountable? Perhaps the only sure winners are the lawyers and law firms.

Fatalities and incidents

As far as we know, there are no clear examples of AI as malevolent as a sci-fi movie villain. Self-driving cars and factory machines are known to make mistakes, but for now none of it is the work of an evil AI. The far more plausible, and of course tragic, explanation for a crash is a foolish human texting over breakfast behind the wheel, not a self-driving car run amok. Even so, we do not yet know how to screen AI systems for the failures that cause serious injury or death.

Clever users are already learning the ways AI tends to fail. AI can be remarkably good at parsing ambiguous information yet stumble over a task as simple as counting from one to five. But what if the failure is something graver than a miscount? However much money is on the table, the safest conclusion is that there are at least some specific tasks AI cannot be trusted to do.

Great expectations

Humans tend to imagine that animals, or AI, think the way humans do. The thought can be frightening, since humans are often disappointing and sometimes downright dangerous. The real problem is that AI embodies a different kind of intelligence, one we do not yet fully understand. As a species, we need to learn far more about AI's distinctive strengths and weaknesses.

In the meantime, human optimism has inflated the field of artificial intelligence beyond any ideal it can live up to. It is not the scientists' fault that they fail to meet our expectations, and it is hard to hold the companies accountable for capitalizing on the excitement. The fault of letting hope outrun reality lies entirely with us. If we expect too much, the field of AI is doomed to disappoint.


Source: ITWorld Korea by www.itworld.co.kr.
