
Embrace AI but be aware of the data pitfalls for the unwary

By Brian Civin, Chief Sales and Marketing Officer at AfriGIS

Artificial intelligence (AI) has made impressive strides over the past five years, with ChatGPT’s mainstream success marking an inflection point. The internet is filled with images, videos, songs and articles created by generative AI. Algorithms in the software and platforms we use every day, such as Netflix, Apple Maps, Google Maps and Uber, help to shape the media we consume, the products we buy and even the routes we take to work.

Out of the public eye, companies are putting AI to work for applications as diverse as detecting fraud, generating code and reaching customers with personalised marketing. The tech and automotive industries are well advanced in trialling autonomous vehicles that use machine learning and sophisticated algorithms to safely navigate the streets. And this is just the beginning.

As AI matures, we’re starting to see use cases where the technology stands in for humans and makes decisions on their behalf. A Snapchat influencer called Caryn Marjorie has created an AI voice bot version of herself to talk to her followers in real time. Meanwhile, Chinese tech company NetDragon WebSoft has “appointed” an AI bot named Tang Yu as its CEO.

The overlooked risks of accelerating AI adoption

These examples show that AI has come a long way and that there are some compelling use cases for it in nearly every industry and business function. Yet there is also a danger that companies may overlook the risks of AI as they accelerate their adoption over the next few years. Though AI can support decision-making and critical thinking, it can’t completely replace human agency and judgement.

As quickly as AI evolves and improves, it will never be perfect for the simple reason that it relies on algorithms and data that are fed to it by humans. Although AI systems can “learn”, they can’t completely overcome challenges such as incomplete or inaccurate data, or biased starting assumptions. This introduces a range of risks every company should be aware of as it ramps up use of AI to automate business processes and support decision-making.

Let’s consider some of the issues:

Unvalidated data sources—Most popular generative AI tools use the public internet as their primary source of data. One of the major challenges companies face as they adopt these tools is that they can’t easily validate the underlying quality or accuracy of the information the system uses to generate a piece of content. In many instances, they can’t choose which data they’d prefer to use to feed the system. 

Manipulated data—Many AI systems work in a similar way to search engines in that the popularity of a search term or source may determine whether it’s included in an answer they generate. At its best, this should help to increase the reliability and objectivity of the content. But the downside is that bad actors could manipulate data for purposes such as fraud or to influence public opinion.

Breach of copyright law and privacy regulations—Responses generated by an AI system aren’t original—they’re composites of content that already exists in the public domain or in a company’s records. It’s important to tread carefully around the risks of making unauthorised use of copyrighted intellectual property or personally identifiable information.

Potential loss of control of your own data—In some cases, organisations will need to feed a commercial AI tool with their own data to generate an organisation-specific report or response. Companies must read the small print, since proprietary data they submit often becomes part of the system’s public domain content. In other words, other companies, including competitors, could benefit from information they have gathered at great cost and effort.

Undifferentiated business outcomes—If every business starts using the same AI systems fed on similar datasets, they risk making similar business decisions and generating similar products, services and content. AI can surface some compelling insights, but it requires a combination of unique data and human judgement to unlock its value potential.

Trust your own data, people and partners

The upshot is that companies can’t automate their critical thinking or outsource data governance. Forward-thinking organisations will scope the risks that low-quality and inaccurate data poses to data-driven decision-making and AI processes in their businesses. This exercise should bring together legal, business and technical expertise to consider the veracity of data from multiple perspectives.

As they roll out AI systems, companies should set out clear standards for which data sources they will use to fuel those systems, what the minimum requirements are for trusting data, who will control that data and who may use it. It’s important to know where and how the data was collected in order to avoid using biased, incomplete or otherwise inaccurate datasets.
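One way to make such standards concrete is to codify them in software, for example as a registry of approved data sources with minimum trust requirements. The sketch below is purely illustrative: the source names, owners and thresholds are assumptions for the sake of example, not a description of any particular product or toolset.

from dataclasses import dataclass

# Illustrative sketch only: a registry of approved data sources with
# minimum trust requirements. All names and thresholds are hypothetical.
@dataclass
class DataSource:
    name: str
    owner: str             # who controls the data
    provenance: str        # where and how the data was collected
    quality_score: float   # 0.0 to 1.0, from an internal data audit
    cleared_for_ai: bool   # approved for use in AI systems

MIN_QUALITY = 0.8  # hypothetical minimum requirement for trusting data

def usable_for_ai(source: DataSource) -> bool:
    # Apply the organisation's minimum standards before data feeds an AI system
    return source.cleared_for_ai and source.quality_score >= MIN_QUALITY

sources = [
    DataSource("customer_crm", "Sales Ops", "first-party CRM records", 0.92, True),
    DataSource("web_scrape", "Marketing", "uncurated public web crawl", 0.55, False),
]

for src in sources:
    print(f"{src.name}: {'approved' if usable_for_ai(src) else 'blocked'}")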

Companies should generate and use their own in-house data, or partner with a trusted entity to access accurate, reliable data to fuel AI algorithms. It’s also preferable to work in a closed environment where only the business and its close partners can access and work with the data. To further reduce risk, organisations should put clear policies in place about how employees may use AI, and about how AI-driven decisions should be explained and validated.
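Such policies can also be made auditable in software. As a minimal, hypothetical sketch, each AI-assisted decision could be logged together with the model version, the data sources consumed and the accountable human reviewer, so that decisions can later be explained and validated; all field names and values below are assumptions for illustration.

import json
from datetime import datetime, timezone

# Hypothetical sketch: an audit record for each AI-assisted decision,
# so a human can later explain and validate it. Field names are illustrative.
def log_ai_decision(decision, model_version, data_sources, reviewer):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "model_version": model_version,
        "data_sources": data_sources,  # ideally only sources from an approved registry
        "human_reviewer": reviewer,    # the person accountable for sign-off
    }
    return json.dumps(record)

print(log_ai_decision("approve_credit_limit_increase", "risk-model-v3",
                      ["customer_crm"], "j.smith"))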

Conclusion

While AI has made remarkable progress in recent years, it is essential for companies to be aware of the potential risks associated with its adoption. Trusting one’s own data, people and partners becomes crucial, along with establishing clear standards, data governance and policies to mitigate risks. Striking the right balance between AI’s capabilities and human judgement is key to unlocking business value in a responsible manner.
