Our Trust in AI Continues to Decline – AI in 60 Seconds

Sep 4, 2024 | AI in 60 Seconds, Our Thoughts

Few companies understand and measure the ROI of their AI investment

  • When employees choose their own AI tools, use cases are more diverse: we are tracking over 130 use-case scenarios across 17 industries and 10,000 AI solutions.
  • Enterprise deployments of ChatGPT and Microsoft Copilot tell a different story. Per August data from 4,800 prompts created with Copilot Ada:
    • 80% of usage concentrates on ~20 tasks, though the tasks vary by industry.
    • 5 use cases drive 67% of all interactions.
    • Content summarization, personalized outreach, and data analysis consistently top the charts.
  • Dozens of subscribers reached out to confirm the “productivity leak” observations from our Aug 7 newsletter. Almost all report high individual ROI, but 60% struggle to justify ROI on corporate GenAI deployments. We’re preparing a report on productivity leaks and their impact in these early days of the GenAI revolution.

AI’s Trust Meltdown? We’re repeating online privacy’s mistakes

  • 82% of leaders expressed concerns about AI data handling and security, up from our previous reports.
  • 40% of organizations report at least one issue related to inaccuracies in AI responses. In our research, prototypes, and work with AI innovators, we have seen a dramatic reduction in so-called “hallucinations” by combining RAG with guided user experiences. Our AI-powered compasses and Private Copilots are examples in action.
  • 60% of CTOs and CDOs report strong or very strong concerns with the data handling and AI training practices of leading enterprise generative AI solutions, including this recent announcement from Microsoft. Thanks to more robust privacy frameworks, the announced changes do not yet affect users in the European Economic Area (EEA).
  • Private Agents Trend: In organizations with 100+ employees, private AI agent adoption outpaces general-purpose agents. Why? Better data protection and control over AI training data.
  • 80% of leaders in private organizations and 87% in nonprofits said their trust in an AI provider decreased after reading its AI disclosures. Disclosures that mimic the obscure language of privacy policies and grant providers the right to use customers’ data via automatic opt-in are creating trust issues.

Simplicity is crucial to driving trust; our research shows it increases the likelihood of buying and using a product by 60%. See details at AI Transparency.

We lack AI skills, but security-related skills need urgent attention

  • Over 25,000 individuals have completed our Digital Skills Assessments; these are the bottom 3 out of 20 dimensions assessed:
    1. How skilled are you in protecting access to digital devices and content, including online services and applications?: 28/100
    2. How skilled are you in protecting personal data and privacy and following data regulations?: 26/100
    3. How easy is it to decide if information or data is reliable, accurate, and useful?: 24/100

Training is ramping up

  • 28% of private firms and 14% of nonprofits offer formal GenAI training programs and resources.
  • Traditional training methods (videos, webinars) struggle with <20% engagement.
  • Success stories: Hands-on, use-case-driven approaches show 2-4x better results. Winning strategies include internal ambassador programs, prompt engineering cheat sheets, and AI tutors like Copilot Ada.

Resources

Thanks to your referrals, we’ve surpassed 1,000 subscribers in six weeks! 🚀 Please forward this newsletter to colleagues or share https://ai4sp.org/60 in your social network channels. Together, we’re building a community inspired to create, use, and support AI that Works for All.

Luis J. Salazar

Founder | AI4SP

Sources:

Our insights are based on data from over 130,000 individuals and organizations who used our AI-powered tools, participated in our panels and research sessions, or attended our workshops and keynotes.