The top 5 AI challenges (and how to overcome them)

Artificial intelligence (AI) has dominated the headlines since the launch of ChatGPT in late 2022. Now, more than a year later, it's no longer just a buzzword but one of the most widely sought-after tools across industries. Yet it remains something of an enigma: many business leaders aren't certain of its utility and worry about its implications. Indeed, though many industry experts feel AI is poised to improve processes and productivity, it presents several key challenges.

At CareerBuilder, as part of our research into AI hiring trends, we've examined these challenges and the potential solutions that decision-makers can implement or promote to overcome them.

Misinformation

In a 2023 Forbes Advisor survey about the potential impacts of AI technology, an overwhelming 76% of respondents reported that one of their primary AI concerns was misinformation generated by tools such as ChatGPT, Google Bard, and Bing Chat. 

The unreliability of AI information — at this point at least — reflects how the technology works. AI-generated content relies on existing content found on the internet, and the internet is a varied ecosystem with tons of misinterpretations, incidental falsehoods, and outright lies. When AI churns out misinformation, it's regurgitating the misinformation used to train it in the first place.

The solution to AI misinformation won't be easy. It involves industry leaders being upfront about the technology's accuracy limitations and promoting media literacy so that users approach internet content with a critical eye. Algorithmic improvements may also be integral to combating AI misinformation, as might government regulation.

"AI content relies on existing content found on the internet, and the internet is a varied ecosystem with tons of misinterpretations, incidental falsehoods, and outright lies. When AI churns out misinformation, it's regurgitating the misinformation used to train it in the first place."

Data poisoning

Data poisoning is the act of manipulating the dataset on which an AI model is trained. The idea is to ensure that an AI system builds itself from a faulty foundation, leading to inaccurate decision-making and predictions. While data poisoning can be ethical (to prevent AI models from scraping artists' intellectual property, for example), it can also disrupt AI-generated outcomes in areas where data accuracy is critical, such as healthcare and business. 

Fortunately, solutions already exist for preventing or mitigating data poisoning, including:

  • Data validation and sanitization: Detecting and removing suspicious data before training starts (see the sketch after this list)
  • Model auditing: Regular monitoring of AI models to detect anomalous behaviors, which may arise from poisoned data
  • Data diversity: Using diverse data sources to weaken the overall effect of data poisoning
  • Data source tracking: Keeping a traceable, transparent record of where data comes from, which helps identify the origins of a poisoned dataset

Preventing AI tools from working off bad datasets is largely a matter of AI users adhering to these best practices.
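As a rough illustration of the first practice, here's a minimal sketch of pre-training data sanitization in Python. It assumes a tabular dataset held in NumPy arrays, and the z-score screen and threshold are illustrative choices rather than a prescribed defense; real pipelines would pair this with label audits, provenance checks, and dedicated anomaly detectors:

```python
import numpy as np

def sanitize_training_data(X: np.ndarray, y: np.ndarray, z_threshold: float = 4.0):
    """Flag and drop rows whose feature values are extreme outliers.

    A crude screen for poisoned or corrupted samples: rows that sit far
    outside the bulk of the data (by z-score) are removed before training.
    """
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-9           # avoid division by zero
    z_scores = np.abs((X - mean) / std)
    keep = (z_scores < z_threshold).all(axis=1)
    print(f"Dropped {int((~keep).sum())} suspicious rows out of {len(X)}")
    return X[keep], y[keep]

# Example: 200 clean samples plus 5 injected extreme ("poisoned") rows
rng = np.random.default_rng(0)
X_clean = rng.normal(0, 1, size=(200, 4))
X_poison = rng.normal(50, 1, size=(5, 4))  # far outside the clean range
X = np.vstack([X_clean, X_poison])
y = rng.integers(0, 2, size=len(X))
X_train, y_train = sanitize_training_data(X, y)
```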

Job loss

The potential for job loss is an even bigger worry among consumers, with 77% of Forbes Advisor survey respondents expressing concern that AI will affect their job security. It's less a question of whether AI can effectively perform people's jobs and more about what decision-makers believe. If an employer thinks AI can replace a large portion of their workforce with no detriment to quality, then workers will be out of jobs regardless of how well or poorly the AI performs.

The bright side is that business decision-makers wield much of the power here. How they use AI, if they use it at all, is their choice. People generally agree that AI functions best as a support for human work, not a replacement for it. Employers who agree are in a prime position to set a positive example for their industries by innovating complementary uses for the technology. They can also ease the transition with initiatives like upskilling, reskilling, and job transition programs so that AI and human workforces can coexist.

The "black box" problem

The "black box" problem refers to the lack of transparency into how AI algorithms operate, reach conclusions, and make predictions. To illustrate, imagine that, to guide decisions, your organization has funded a study into consumer motivations. When the research comes back, it lacks sources, so you have no way to verify whether the findings are valid. Not only would you take an enormous gamble by basing decisions on what is essentially your faith in the findings, but you'd also make it impossible to identify and fix problems that lead to unwanted outcomes. 

The "black box" problem also breeds distrust in AI technology and the organizations that use it, and distrust can quickly turn into resistance from both consumers and employees, which is bad for the bottom line.

Techniques already exist for building explainable AI (XAI): AI in which the decision-making process is clearer and more understandable to humans. These techniques include:

  • Model visualization: Using visualization techniques to illustrate how an AI model processes data and uses it to make decisions
  • Feature importance analysis: Monitoring and analyzing the specific features that inform AI decision-making, providing insight into the underlying processes at play (illustrated in the sketch after this list)
  • Natural language explanation: Generating a plain-language description of how an AI model reached a decision, allowing users to track and understand its reasoning
  • Model distillation: Using a simpler model to mimic a complex model, providing an easier channel for understanding the complex model
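To make feature importance analysis concrete, here's a minimal sketch using scikit-learn's permutation_importance. The synthetic dataset and the random forest model are illustrative stand-ins for a real model and real data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: only some features actually carry signal
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=42)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Features whose shuffling barely moves the score are ones the model largely ignores, which helps auditors see what a model actually relies on when it decides.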

However, AI developers may need a push from AI users to devote resources to XAI. Because businesses make up a large share of the market for AI products, the solution is to influence developers through the market itself: support companies that promote XAI, thereby encouraging competitors to get on board.

Data privacy

Data privacy is a key concern because AI models are trained on huge volumes of personal data, including:

  • Names
  • Home addresses
  • Financial records
  • Medical records
  • Social Security numbers
  • Likenesses

The privacy question compounds the transparency problem, as it's unclear exactly how AI models use such data and who can access it. There are also the risks of breaches and unauthorized access, which could expose countless people's personal information to exploitation. On a more personal level, bad-faith actors can use AI to create false profiles, manipulate likenesses, and even identify strangers through AI-powered facial recognition, endangering not only people's reputations but also their physical safety.

An effective solution relies on AI developers prioritizing privacy and data security. They must adhere to privacy-by-design principles, ingraining privacy considerations into the very fabric of their AI models, and to regulatory frameworks such as the General Data Protection Regulation and the California Consumer Privacy Act, which set strict standards for data protection.

As for the businesses using AI, they too can take measures to preserve privacy even as their AI tools gather insights from user data:

  • Data anonymization: Erasing or encrypting identifying information so the data can't be traced to a specific person
  • Zero-trust security: Trusting no entity outside the organization, including third-party AI tools and services, and constantly vetting AI products against internal privacy and security policies
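As a loose illustration of the first measure, here's a minimal Python sketch of anonymizing a user record before analysis. The field names, the salted-hash scheme, and the choice of which identifiers to drop are all assumptions for the example, not a complete anonymization regime:

```python
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret and rotated in practice

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Drop direct identifiers and keep only a pseudonymous key, so
    records can still be joined for analysis without exposing identity."""
    DROP = {"name", "home_address", "ssn"}  # hypothetical field names
    out = {k: v for k, v in record.items() if k not in DROP}
    out["user_id"] = pseudonymize(record["ssn"])
    return out

record = {"name": "Ada Lovelace", "home_address": "12 St James Sq",
          "ssn": "000-00-0000", "purchase_total": 129.95}
print(anonymize_record(record))
```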

AI seems to be on a trajectory toward broad proliferation across industries, so it's in everyone's interest to understand the risks and devise solutions. In human resources, for example, decision-makers already use AI tools for a wide array of applications, but the consensus is to center its use on supplementation, not supplantation. Narrowing the technology's scope is one way to mitigate the impacts of the above-mentioned challenges. Download CareerBuilder's "AI in Hiring: 2024 Trends, Insights & Predictions" for additional information.

Also, if you're interested in incorporating AI in your hiring process, consider using CareerBuilder's AI tools to optimize your job descriptions and source your candidates. We were one of the first hiring solutions to use AI and are proud to use it to support both sides of the hiring dynamic. 

More tips about technology in the workplace:

Before you can implement new technology in the workplace, you have to stay current on the tech trends shaping industries at the moment.

Communication is a key component of the candidate journey. With a recruiting chatbot, you can automate your communication to improve the journey for top talent.

Although incorporating new technology may introduce skills gaps in your workforce, you can fill in those gaps by upskilling your employees.
