(BPT) - Do you use artificial intelligence (AI)? Even if you don't use AI platforms directly, chances are that you have interacted with AI in some form without knowing it.
According to research by Barna in partnership with Gloo, more than two in five U.S. adults regularly use smart home devices, in-device assistants and facial recognition, all of which utilize AI technology. Yet, when asked how often they use AI in their personal life, most respondents said not very much or not at all.
When used correctly, AI has the incredible potential to promote global well-being and human flourishing. It could fundamentally change the human condition, from unlocking solutions to climate change and poverty to breakthroughs in medicine that could eliminate life-threatening diseases. However, without proper guidance on how to use this technology responsibly, AI has the potential to do great harm to humanity.
What are possible concerns about AI?
Like any technology, AI is constantly changing and improving. That's why there's an urgent need to shape this technology for good. Specifically, creators and users of AI need to think critically about the concerns it raises.
A common concern about AI is the trustworthiness of the large language models (LLMs) that serve as the base for AI platforms. According to the World Health Organization (WHO), the data used to train LLMs may be biased, leading to query responses with misleading or inaccurate information that poses a risk to users.
The WHO also points to how LLMs can be misused to generate and disseminate highly convincing disinformation. Whether in text, audio or video, AI-generated content can be difficult to distinguish from authentic material.
Transparency and accountability are also major sticking points in AI's proliferation. Because AI systems require massive amounts of data, it's not always clear where an LLM is sourcing its information, whether the information is correct and whether consent was given for the data to be used. Compounding the problem, according to USC Annenberg, many AI algorithms are considered "black boxes," meaning that how the AI comes to its conclusions is difficult to understand and interpret. Without this knowledge, how can AI - and its creators - be held accountable?
Finally, and perhaps most importantly, AI has had very real effects on human well-being. Many people are engaging with AI chatbots to replace real-world relationships, exacerbating the current loneliness epidemic. According to the U.S. Surgeon General's Advisory report Our Epidemic of Loneliness and Isolation, "Several examples of harms include technology that displaces in-person engagement, monopolizes our attention, reduces the quality of our interactions, and even diminishes our self-esteem. This can lead to greater loneliness, fear of missing out, conflict, and reduced social connection."
While the report doesn't directly reference AI, this technology's generative abilities, which can mimic human speech and conversation, are alluring to those already at risk of isolation. The American Psychological Association (APA) reports that many teens are turning to AI chatbots for friendship and emotional support, and that these chatbots have engaged in harmful discussions with teen users with very little prompting.
Are there solutions to these concerns?
Avoiding AI altogether is not a solution to the concerns and real-world effects of LLMs. AI is here and it's here to stay. Organizations that are already implementing, or plan to implement, AI in their work must consider what guardrails are needed to ensure that it can be a helpful, not harmful, tool for humanity.
There isn't a silver bullet solution to AI's ethical issues, but according to McKinsey & Company, organizations should have processes in place to check AI content for appropriateness, hallucinations and regulatory compliance, and to validate that it aligns with user expectations.
Some companies are already working on creating AI platforms and establishing research that focuses on how LLMs can aid human flourishing, particularly within the faith ecosystem.
Gloo, a technology company serving the faith and flourishing ecosystem, provides churches and frontline organizations, such as volunteer groups and nonprofits, access to AI, distribution, technology and solutions they need to better reach and serve their communities. Gloo's focus on using technology to aid faith and human flourishing has led Gloo to partner with researchers from Valkyrie Intelligence to create the Flourishing AI (FAI) Benchmark.
This first-of-its-kind comprehensive evaluation framework measures AI alignment across seven dimensions: character, health, relationships, finances, happiness, faith and meaning. This new benchmark is based on the recent broader scientific research from the Global Flourishing Study (a collaboration with Harvard, Baylor and Gallup). It represents one of the first comprehensive assessments of AI values, measuring not just technical capabilities but how well models support human well-being.
While this benchmark is in its infancy, it has the potential to help guard against the harm of AI so that individuals and organizations can use this technology to help humanity flourish and thrive. Like AI itself, the FAI will be updated regularly as new AI models are released and as new research about human flourishing is completed.
While there are very real concerns about the use of AI, curiosity - not fear - will be most helpful in shaping this technology. As individuals and organizations continue to question both the potential and the pitfalls of LLMs, efforts to tackle these concerns head-on and to ensure the technology promotes human flourishing are already underway.
