Virtue Signalling or Value Creation? Unmasking Big Tech's New AI Safety Board

Four tech giants, Google, Microsoft, OpenAI, and Anthropic, have joined forces to create the Frontier Model Forum, a new industry body focused on the safe development of advanced artificial intelligence (AI).

According to a CNN article, the Frontier Model Forum aims to work with policymakers and researchers to guide the development of advanced AI. Its stated goals include developing best practices for AI safety, promoting research into AI risks, and sharing information with governments and civil society.

And I'm sure this has nothing to do with U.S. and European Union lawmakers actively working on binding regulations for these exact companies, right?

Or maybe big tech is trying to beat regulators at their own game, agreeing on the ground rules before more players take the field.

Is big tech's new AI safety board a genuine step forward or just a clever PR stunt?

Now, don't get me wrong. I'm all for self-regulation. But this? This is just virtue signaling. Big tech says, "Look at us...we're playing nice with the rest of the world." But let's cut through the PR spin. They need to govern themselves, or Congress will do it for them, and who wants that?

Plus, the Frontier Model Forum's founding members, along with Amazon and Meta, have pledged to the Biden administration to subject their AI systems to third-party testing before public release and to clearly label AI-generated content. So not only are they trying to set the rules, but they're also trying to show us they can play by them.

As AI investors, don't be swayed by big tech's virtue signaling. Instead, focus on the real advancements in AI technology. Look for companies genuinely pushing the boundaries of what's possible, not just those trying to make a good impression.

Stay liquid, 

Nick Black

Chief Digital Asset Strategist, American Institute for Crypto Investors