The White House Takes Its First Step Towards Generative AI: What The Guidelines Mean for Startups, Investors, and Brands

By Greg Kahn
Emerging Tech Exchange
Founder & CEO

Published on November 8, 2023

Back in May, an article about generative artificial intelligence’s ability to fool banks and even relatives by copying a person’s voice and facial features set off alarms amid the excitement over the rapid rise of ChatGPT and the rush of investment into startups in the space.

WSJ Personal Tech Columnist Joanna Stern tested gen AI “clones” of her likeness, sparking worries over “virtual mini-mes” taking over our lives.

She explored AI tech from two companies: Synthesia, which lets users create artificially intelligent avatars from recorded video and audio, and ElevenLabs, an AI speech-software developer.

When Stern's Synthesia avatar called her actual sister, her sibling wasn’t quite convinced: the virtual Stern didn’t pause for breaths.

But when the “fake Stern” called the voice biometric system for Chase’s credit card service, the system was completely fooled.

It was another troubling sign that this truly transformative technology has outpaced efforts not only to control it, but even to understand it and develop plans to manage it.

Fast forward to last week. That’s when the White House took a big step by issuing a comprehensive executive order outlining guidelines and principles for gen AI programs. (The Biden Administration gen AI fact sheet is well worth examining.)

These guidelines are just that: recommendations for platforms intended to reduce exploitative and dangerous use cases. While enforcement of the Biden Administration’s rules is unclear, they do have far-reaching implications for startups, investors, and brands involved in AI-related ventures. Here’s the breakdown:

Startups: On the positive side, the order emphasizes innovation and competition, which can pave the way for startups to thrive in a regulated but supportive environment. 

The push for advancing U.S. leadership in AI technologies is encouraging for those seeking to establish a foothold in this rapidly evolving industry. Furthermore, the call for privacy-preserving techniques aligns with startups aiming to create AI solutions that respect user data.

However, startups will need to pay attention to the standards of AI safety, security, and content authentication. Complying with these standards can be resource-intensive and may require a shift in development practices. 

At the same time, the demand for safety test results from large AI model developers, such as OpenAI and Meta, could stimulate startups to prioritize safety from the outset.

Investors: The White House hopes the guidelines will bring a degree of predictability and stability to the AI investment landscape. The emphasis on safety and security standards provides a framework for assessing the risk associated with AI ventures, helping investors make more informed, practical decisions. Moreover, the focus on equity and civil rights aligns with the growing trend of ethical and responsible AI investment, potentially attracting a broader range of investors.

There is also the potential for new opportunities to emerge from the guidelines. With increased government support for AI research and development, there may be avenues for strategic investments in startups and organizations focused on privacy-preserving technologies, safety assessment, and content authentication.

Brands: Brands that employ or are exploring gen AI in their operations will face several notable changes due to the White House's guidelines. The call for data privacy regulations is a pivotal consideration for companies handling user data. Brands will need to adapt to evolving data protection requirements, ensuring they are compliant with these new regulations while maintaining the trust of their customers.

Additionally, the focus on preventing “algorithmic discrimination” has direct implications for brands that deploy AI systems for things like customer profiling. Ensuring fairness in AI systems becomes a paramount concern, requiring companies to review and potentially adjust their AI-driven processes to avoid discrimination.

The first reaction to the White House’s attempt to put guardrails around gen AI’s power has been largely dismissive. Some have pointed out that the guidelines don’t seem to have any teeth to punish bad actors. Others have argued that the guidelines are inevitably out of date, given how fast the technology is changing.

But I would view these guidelines as just an initial step for policymakers, regulators, and business groups to get a handle on the future of AI. At over 100 pages, Biden’s executive order is impressive in its breadth and depth. Yes, it’s primarily a blueprint for how the U.S. government would address the next wave of advances, as well as the role of AI in enabling cyberattacks and even the deployment of bioweapons.

While the AI guidelines do cover a lot of ground, one area I’d like to see addressed is transparency in further AI development. There’s a lot of open-source AI code that’s widely available.

That’s a great thing. But the potential for this information to fall into the wrong hands is something that even the biggest proponents who shout “information wants to be free” should be a little worried about.

As I noted in the previous ETX Newsletter, I anticipate leaders at gatherings like January’s World Economic Forum will outline clearer guidelines to shape the use and curtail the misuse of AI. Realistically, the best anyone can do right now is attempt to catch up. 

Greg Kahn 

Emerging Tech Exchange
Founder & CEO

Salt Sound Marketing

Salt Sound connects people to products + services through a holistic approach to brand marketing. We develop, design and execute in digital and experiential channels.

https://saltsoundmarketing.com