Superficial Media Analysis and the Adoption of Generative AI

Feb 17, 2023

Luis J. Salazar

As we strive to build equitable and just AI for social progress, we cannot ignore the risks that arise from misinformation and biased analysis. At the same time, we must be wary of superficial media analysis that confirms existing biases and spreads fear instead of providing reliable insight into the potential of emerging technologies.

The frenzy of superficial analysis published on social networks and by reputable publications such as the New York Times brings me back to the days when we feared the printing press because it would spread dangerous ideas.

We still need to work out how to battle misinformation and how to perfect equitable, just AI that does not perpetuate our biases. I am confident that, as a society, we will do so as we embrace AI for social progress.

Emerging technologies can be met with fear and misinformation.

History offers precedents: new technologies have often been attacked by the media or by experts, as when people feared that the introduction of the printing press would spread dangerous ideas.

We have seen a growing number of social media posts and articles claiming to sound the alarm on the dangers of generative AI. While some are valid critiques, many rely on shallow testing and edge cases, neither of which is a thorough or reliable way to evaluate the risks or benefits of a new technology.

We must approach new technologies with an open mind and educate ourselves before spreading misinformation. Like a chef’s knife, these technologies can be used for good or bad.

For example, many social media posts and articles claim that ChatGPT and Bing Chat created dangerous content or are helping students cheat on essays. Some are valid critiques; others are edge cases in which the user pushes the technology to produce a harmful result in order to confirm their biases.

For example, one can instruct the model to pretend to be a character in a movie whose role is to misinform people about UFOs, coaxing the model until its guardrails are overcome. Once that is accomplished, one captures the screen, omits the coaxing, and shows the world the result in order to misinform.

Practical uses of generative AI for good are abundant.

Instead of forcing the model to generate harmful responses, one can use generative AI for entertainment and introspection. I prompted the model to act as the Dalai Lama XIV, drawing on all available information about him, and have had great debates with it about AI for social progress and the meaning of life.

Of course, I also use it for many business scenarios. My company, Connect Sparks, is creating affordable AI-powered solutions that help nonprofits classify large datasets, create fundraising content, and analyze the sentiment of feedback from the clients they serve. These nonprofits have saved over $60,000 in operating costs using a spreadsheet powered by generative AI.
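The sentiment-analysis workflow described above can be sketched in a few lines. This is a minimal illustration, not Connect Sparks' actual implementation: the `call_model` argument is a hypothetical stand-in for whatever LLM API the spreadsheet add-on would call, and the prompt wording is an assumption.

```python
# Minimal sketch: LLM-backed sentiment classification for spreadsheet rows.
# `call_model` is a hypothetical stand-in for a real LLM API call; here it
# is stubbed for illustration.

def build_prompt(feedback: str) -> str:
    """Wrap one cell of client feedback in a classification prompt."""
    return (
        "Classify the sentiment of the following client feedback as "
        "exactly one word: positive, negative, or neutral.\n\n"
        f"Feedback: {feedback}\nSentiment:"
    )

def parse_label(completion: str) -> str:
    """Normalize the model's raw completion to one of the three labels."""
    word = completion.strip().lower().rstrip(".")
    return word if word in {"positive", "negative", "neutral"} else "neutral"

def classify(feedback: str, call_model) -> str:
    """Classify one row; call_model maps prompt text to completion text."""
    return parse_label(call_model(build_prompt(feedback)))

# Example with a stubbed model (a real deployment would call an LLM API):
stub = lambda prompt: " Positive."
print(classify("The food bank volunteers were wonderful!", stub))  # positive
```

Wrapping the model behind a `classify` function like this keeps the spreadsheet side simple: each row's feedback cell goes in, a single normalized label comes out, and any unexpected model output falls back to a safe default.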

I am also testing many innovations from entrepreneurs who are creating powerful apps to help us with our everyday tasks. For example, Type.ai is an AI-powered document editor that helps you write quickly. Stew Fortier, its Co-Founder & CEO, reads every piece of feedback he receives to ensure he delivers an excellent product for the masses.

There will always be someone who finds a way to use a chef's knife to harm others, but thankfully, most people will use it to prepare a meal, and a few will use it to create culinary masterpieces that bring joy to the world.

Let’s focus on using generative AI for social progress and not allow fear and misinformation to obscure its potential for good.

Can we Help?

Contact us if you need help understanding how to use AI responsibly in your organization, or if you are raising capital for an early-stage AI startup.
