Nova is building guardrails for generative AI content to protect brand integrity

By Webdesk


As brands incorporate generative AI into their creative workflows to produce new content, they must proceed with caution to ensure that the material adheres to the company’s style and brand guidelines.

Nova is an early-stage startup building a suite of generative AI tools designed to protect brand integrity. Today, the company is announcing two new products to help brands monitor AI-generated content: BrandGuard and BrandGPT.

With BrandGuard, you feed in your company’s brand guidelines and style guide, and using a set of templates Nova has created, it checks content against those rules to make sure it’s in line. BrandGPT, meanwhile, lets you ask questions about a brand’s content rules in a ChatGPT-style interface.

Rob May, founder and CEO of the company, who previously founded Backupify, a cloud backup startup acquired by Datto in 2014, says companies want to take advantage of generative AI to create content faster, but they are increasingly concerned about maintaining brand integrity. That led him to the idea of building a guardrail system to protect brands from generative AI mishaps.

“We heard from several CMOs who were concerned about ‘how do I know if this AI-generated content is proprietary?’ So we built this architecture that we’re launching called BrandGuard, which is a really interesting set of models, along with BrandGPT, which acts as an interface on top of the models,” May told TechCrunch.

BrandGuard is essentially the back end of this brand protection system. Nova built five models that look for things that are off: they perform checks for brand safety and quality, and whether the content is on brand, on style and on campaign. The system then assigns each piece of content a score, and each company can decide the threshold at which a human is brought in to review the content before publishing.

“If you have generative AI making things, you can now score that on a continuum. And then you can set thresholds, and if something is below, say, 85% of the mark, you can have the system flag it for a human to look at,” he said. Companies can decide which threshold works for them.
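To make the threshold idea concrete, here is a minimal sketch of how such a review gate might work. The `brand_checks` scores and the `needs_human_review` helper are hypothetical illustrations based on the description above, not Nova’s actual API.

```python
# Hypothetical sketch of a BrandGuard-style review gate (not Nova's actual API).
# Each check returns a score between 0 and 1; content scoring below the
# configured threshold on any check is flagged for human review.

def needs_human_review(scores: dict[str, float], threshold: float = 0.85) -> bool:
    """Flag content if any brand check falls below the configured threshold."""
    return any(score < threshold for score in scores.values())

# Example scores for the five kinds of checks described above (made up).
brand_checks = {
    "brand_safety": 0.97,
    "quality": 0.91,
    "on_brand": 0.82,   # below the 85% mark
    "on_style": 0.93,
    "on_campaign": 0.88,
}

if needs_human_review(brand_checks):
    print("Flagged: route to a human reviewer before publishing.")
else:
    print("Passed: content can be published automatically.")
```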

BrandGPT is designed to work with third parties, such as an agency or a contractor, who can ask questions about the company’s brand guidelines to make sure they’re complying, May said. “We’re launching BrandGPT, which is meant to be an interface to all these brand-related security things that we do, and as people interact with brands, they can access the style guides and understand the brand better, whether they’re part of the company or not.”

These two products are available in public beta starting today. The company launched last year and raised $2.4 million from Bee Ventures, Fyrfly Ventures and Argon Ventures.


