Jennifer Laine Van Beek on AI, Why Brand Suitability Must Replace Brand Safety, and the Confluence of Inference
Jennifer Laine Van Beek is a consultant for a range of companies in the advertising and AI space and has just launched Sprezi, an AI-powered hospitality company. Jennifer’s wide-ranging career took her from the embassies of Japan, South Korea, and Russia to the soybean pit at the Chicago Board of Trade even before she began traveling the globe producing and styling short films and commercials for LVMH and other prestige brands.
Seeing a gap in infotainment-style B2B and B2C content, she then co-founded Cloud Productions, where she was responsible for building an LMS platform for leadership and personal growth content. The platform is currently licensed to several Fortune 500 companies and distributed to millions of subscribers. Jennifer chose to exit the company in 2020 and began working with Oxford Road, a podcast advertising platform, to create new AI-based tools that help brands assess the risk associated with content on over 40,000 shows in order to find the right—and wrong—places to advertise.
We talked with Jennifer about her remarkable career, her newest travel and dining venture, brand suitability, and why brands absolutely must have a clear set of values when buying advertising.
You’ve been really successful in technology roles recently, but that’s not where you started your career. Can you tell us a little bit about how you got started?
From the outside there isn’t exactly a throughline in my career, but it all kind of makes sense and works together. I studied international economics. I graduated early and took an internship with the State Department. I loved to travel, and from the outside looking in, I was sure this would be my career path. I spent time in Japan, South Korea, and Russia, and while I loved being overseas, I learned pretty quickly that this wasn’t the right path for me.
I came back and got a job in finance. I was always very mathematically minded, and this felt like a smart, responsible career decision. I started live trading at the Chicago Board of Trade. I did pretty well, in part because I was the only woman in my trading pit. It became a currency because success in that role can be dependent on the level of exposure you have in that environment. But being the only woman was also terrible; people would speak to me in ways that were just completely deplorable. It changed me. I became robotic and shut down conversations as quickly as possible. It bled into my personal life. When I went home for Thanksgiving, I had a hard time interacting with my family like a normal person. My dad noticed and had a really difficult conversation with me, helping me realize this was costing me more emotionally than it would ever be worth financially. He suggested I take a break and go on vacation. I resisted, of course, as I hadn’t really had a vacation in years, but I decided to go to Greece for a couple of weeks to recharge.
A few nights into my vacation, I met a man in a bar who said he was shooting a short film for Louis Vuitton the next day, and his stylist wasn’t going to make it. He thought, based on what I was wearing, that I might be able to step into this role. I was pretty sure it was bulls**t because most of the men I’d met in my professional career up until that point were completely full of it. But I said okay, and the next morning, I got a call sheet from LVMH with an outline of what they wanted. I dove in, and it turned out to be the most fun project I’d ever worked on.
I fell in love with the work and started traveling with him and his team, shooting short-form content for different luxury brand campaigns. It’s crazy to say that we were shooting for social channels because it was 2011. Instagram was still pretty new, and people didn’t know how to use it; YouTube didn’t really have a fashion presence yet, and short-form content was not something many luxury brands were doing at the time. But we found our niche and started making some really impressive work, which I’m still proud of today.
Over time, my role evolved beyond styling into production and creative direction. I loved that job, and it taught me invaluable lessons and honed skills that I still use today. Still, it didn't exactly feel like a responsible career in the more traditional way, and I wanted to provide value beyond the visual aspect of the work.
Your next role is where you really leaped into technology. Can you tell us about that?
The husband of a good friend is a leading corporate psychologist. He works with CEOs around the world, and his clients wanted to share what they were learning from him with their teams. He hired me to create a series of videos. It was almost like a Master Class, though that company hadn’t yet been founded. We shot videos about leadership in which he teamed up with a subject matter expert to share specific stories or skills they’d learned through their careers. We called the style infotainment, and it was heavily influenced by the beautifully short, story-driven documentaries that were beginning to become popular across streaming services.
In order to have a platform that was visually appealing, delivered the video in sections, and provided a trackable curriculum for users and their L&D/HR departments, we had to custom build an LMS (Learning Management System). A lot of the companies he worked with began to license our series for their internal leadership development needs as either branded or white-labeled content. Once we knew we had produced something incredibly valuable to individuals looking to grow in their careers, we expanded beyond his client companies and eventually offered a B2C product as well.
Your next role was even more tech-heavy. Can you tell us what you do at Oxford Road?
Oxford Road is the leading independent podcast advertising agency. I met with the CEO and founder early in 2020, which, as we all know, was a contentious election year. He explained that a lot of the brands they worked with were running away from shows about news because they were afraid politics, political opinions, and infighting would reflect badly on their brand and alienate their customer base. He wanted to make sure podcasts that fell into the news and political genres didn’t get defunded because those shows can be very important to listeners and the health of our country, not to mention they can perform really well for clients.
He wanted to find a way for brands to have transparency into the type of content being discussed on shows at scale so they could make a decision about whether to align with a given podcaster or show. Essentially, he wanted to put nutrition labels or a risk evaluation on each host/show so that a brand could look at data and make a more objective decision.
It was an idea on a blank piece of paper at the time: a problem he wanted me to solve. Honestly, I had no idea how to start solving it, but that's my favorite type of project. I started meeting with networks and podcast hosts and with brand advertisers to figure out what they needed. Keep in mind these are people who are spending upwards of $10 to $15 million in this space. Eventually, this process led me to Tamara Zubatiy, a woman who was just finishing her PhD at Georgia Tech. She was building a contextual risk tool for Twitter, which is not dissimilar to podcasting in that they are both mostly opinion-based. We started working together to understand how to apply risk profiles at scale.
The first thing we needed was better transcripts. There are some great transcript apps out now, but a few years ago, their accuracy wasn’t good enough to produce consistently reliable data. Once we had our transcript tool, we set up RSS feeds for each show and host in that tool. Then, we could apply our tech layer to analyze the content of each episode and show.
“Brand safety suggests there’s a right and wrong, but suitability understands it’s more nuanced and personalized.”
How do you even start with that kind of analysis when there are so many podcasts out there on so many different topics?
We based our analysis on the GARM Framework created by the World Federation of Advertisers in 2019. GARM identified 11 key categories of content that could be risky for advertisers, such as drugs and alcohol, terrorism, and debated sensitive social issues. Almost all of politics fits into that last category. We trained our models to identify these buckets and then apply a contextualized risk rating around them.
Then, we had to get far more granular. We had to do things like get the models to know the difference between a shot in a movie, a shot in basketball, and somebody getting shot on the street. If you’re talking about sexual content, the models need to understand the difference between educational content and adult content or anything that is lewd or oversexualized. Politics was a hard one because brands have different thresholds for it, and those thresholds may be different for the left and right sides of the political spectrum. Recently, it’s also gotten harder to determine if something is true, false, or simply an opinion/hot take and if the audience even cares.
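The disambiguation problem described above can be illustrated with a toy sketch: the word "shot" alone says nothing, so this example weighs the surrounding words before flagging a sentence. Everything here is invented for illustration; the real system uses trained models and contextualized risk ratings, not word lists.

```python
# Toy illustration of why context matters when flagging risky content:
# "shot" is ambiguous on its own, so we compare the violent vs. benign
# context words around it before flagging. Word lists are hypothetical.
VIOLENCE_CONTEXT = {"gun", "street", "killed", "police", "victim"}
SAFE_CONTEXT = {"basketball", "movie", "scene", "three-point", "camera"}

def flag_violence(sentence: str) -> bool:
    words = set(sentence.lower().replace(".", "").split())
    if "shot" not in words:
        return False
    # Flag only when violent context outweighs benign context.
    return len(words & VIOLENCE_CONTEXT) > len(words & SAFE_CONTEXT)

flag_violence("He hit the game-winning shot in basketball")  # benign, not flagged
flag_violence("A man was shot on the street by police")      # flagged
```

A production model would of course use learned embeddings rather than hand-written lists, but the principle (that the same keyword must be scored differently depending on its neighbors) is the same.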
We created political filters and scales from the hyper-extreme right to the hyper-extreme left. A brand can say they don’t want to advertise on any political show, or they can fine-tune the filters for their comfort. They don’t have to choose equally on either side of the political spectrum. A brand could say, for example, I don’t want to be associated with anything right at all, but I’m okay with going to the center or the left. Or vice versa.
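The asymmetric tuning described above can be sketched as a comfort band on a signed lean scale. The scale endpoints, field names, and numbers below are all invented for this sketch; Oxford Road's actual filters and ratings are not public.

```python
# Hypothetical sketch of an asymmetric political filter: shows sit on a
# signed lean scale from -2.0 (hyper-extreme left) to +2.0 (hyper-extreme
# right), and a brand sets an independent bound on each side.
from dataclasses import dataclass

@dataclass
class Show:
    name: str
    political_lean: float  # model-estimated lean for recent episodes

def passes_political_filter(show: Show, max_left: float = -2.0,
                            max_right: float = 2.0) -> bool:
    """True if the show's lean falls inside the brand's comfort band.

    The band need not be symmetric: a brand can accept center and left
    content while excluding anything on the right, or vice versa.
    """
    return max_left <= show.political_lean <= max_right

# A brand okay with the center and the left, but nothing on the right:
shows = [Show("A", -1.2), Show("B", 0.0), Show("C", 0.8)]
approved = [s.name for s in shows
            if passes_political_filter(s, max_left=-2.0, max_right=0.0)]
```

Here shows A and B clear the filter and C does not, which is exactly the "left and center okay, right excluded" configuration described above.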
It was also important for us to measure the levels of attack, hostility, and hate in podcast content. That’s where we were seeing brands get into trouble, especially when they aligned with a host who shared aggressive viewpoints with their audience each week. We partnered with Seekr, a leading AI company that is laser-focused on transparency. Together, we mapped out the Civility Score, which tracks context, tone, and presence of attack across many levels of severity. This gives brands the opportunity to align with news, political, and comedic content while choosing how comfortable they are with venomous or hateful viewpoints. For example, brands might legitimately not care if a host is supporting Trump or Biden but not want to be associated with a show that is personally attacking politicians or attacking a protected class like the LGBTQ community.
In developing the Civility Score, we started with 20 shows and chose hosts and episodes we felt were being misrepresented or unnecessarily pinged by GARM. Once our models and outputs were in a place that we felt accurately represented the data, we expanded to over 40,000 shows. Now, over $200 million of podcast advertising revenue is being filtered through these tools, allowing brands to set filters individually across each category: Civility, GARM, and Politics.
Did you have to listen to a lot of content while training your models and developing your civility scale?
Yes. We custom-built the scales across all categories and had to listen to hours and hours of podcasts. We’re a very divided country right now, and I have to tell you there were days when it was really dark and challenging. There is so much negative and truly hateful content that, at times, I just had to shut my computer and go for a walk in order to try to purge that language out of my head. You’re not just reading it; you’re listening, and the hate really comes through when you listen.
At one point, we were designing the civility scale, and I was also working on a profanity score, which involved listening to some of the worst content out there in order to establish a baseline or floor. After a few weeks, I deleted my email off my phone and went on a vacation to reset. I just couldn’t do it anymore. We realized it was too much to ask of any one person, or even a small group. Now we share the responsibility and tell the team not to spend more than 30 minutes a day on the extreme content as it can become emotionally taxing.
“Safety is table stakes at this point; no brand should be sponsoring terrorist content or content below the floor. We know that; now we need people to look deeper and understand what’s right for their brand and their consumers.”
This is all about brand safety, right?
We’re trying to move away from brand safety as a term and talk instead about brand suitability. Brand safety suggests there’s a right and wrong, but suitability understands it’s more nuanced and personalized. Safety is table stakes at this point; no brand should be sponsoring terrorist content or content below the floor. We know that; now we need people to look deeper and understand what’s right for their brand and their consumers.
We always strive to remain unbiased because brand marketers know their values better than we do. We’re just here to provide the tools to help them translate and apply those values to their media buying. Today, most brands have internal brand values statements or documents, but you’d be surprised at the number of brands who haven’t done the work to map this out or set a foundation. Brands need to know what they stand for and what they don’t, or they will come across as inconsistent to their customers.
Tools like ours can help brands make these decisions more objectively, but only if they have their values framework together and pay attention to where they’re deploying media dollars. Assumptions are generally not representative of reality when it comes to creator channel content; your idea of someone might not be consistent with what they’re currently saying or how they’re saying it. This is why that data layer is critical to helping you navigate investment decisions.
What safety advice do you have for brands that are worried about where they’re advertising?
Podcasting is the most intimate channel. Consumers listening to a podcast feel like they have a deep understanding of and typically a relationship with the host; this person is their friend, and they are loyal. On the flip side, people may have strong feelings about controversial hosts they’ve never listened to. Consistency is really important, especially if your brand is the main sponsor of a program or your commercial plays right before or after a host delivers potentially toxic words. Have a plan in place for how you will navigate this scenario.
Will you react? How will you react? Where will you react? Internally or externally? Will they be the same message or a different message? Will you cancel or just pause? Will you pull your ad? Can you pull your ad? Do you have protection clauses in place in your contracts?
There are so many questions you need to answer to ensure that everyone is ready and on the same page. Brands that have a strategy in place do far less backpedaling and have lower levels of long-term backlash.
Don’t worry—it’s not all alarms and chaos. These are just the pieces that our tools help mitigate. Podcasts and creator content are great channels for advertising, and most of the time, they’re quite safe.
“Brands need to know what they stand for and what they don’t, or they will come across as inconsistent to their customers.”
Are you planning to move beyond podcasts and apply your algorithms and scales to other channels, such as social media or video?
We’re in the discovery phase of figuring out how to make this work with video and social, starting with YouTube. We're currently developing different technology to study body language, which wasn’t necessary for audio or web. In video, reading body language and facial expressions is critical to making brand suitability tools relevant. Micro-expressions can flash and change really quickly, and there’s so much opportunity for miscategorization or false positives and false negatives.
I'm most interested in passive-aggressive and sarcastic delivery and behavior, which will be hard to code because you’re saying one thing, but you actually mean something else entirely, and everyone knows it. Humans pick up on that instantly, but that doesn’t mean AI will be able to detect it easily. So, right now, we're really working on understanding body language and visual cues, as we believe these will be the biggest factors in breaking through from audio to visual.
You’re still working with Oxford Road and others as a consultant, but you launched your own project as well that takes you back to your travel roots. Can you tell us about that?
Yes. Through my work with ML teams over the last four years, I’ve had a front-row seat to learning and building application-layer tools, both for enterprise and consumer. We’re only at the beginning of understanding how to harness the technology’s current and future capabilities, but it has transfixed me entirely. I’ve wanted to solve a problem that I, and many other people, face, while simultaneously exploring different possible future capabilities.
My company is Sprezi, and our aim is to translate taste to code. Right now, I believe the future of application-layer AI lies in two main buckets: taste and trust. Sprezi’s roots are in both. We wanted to start by addressing the hospitality industry, specifically travel and dining, where choices are very personal, subjective, and varied by user.
Our MVP behaves like your personal admin for all restaurant and hotel decisions, aggregating popular and personal data and continually learning from your choices and behavior, as well as the behavior of users similar to you. There’s an unnecessary cold-start problem in hospitality: if you’re researching a trip, or even just trying to figure out the best place to have dinner near you, there are so many different online destinations you have to visit before you can book anything. Additionally, there are hundreds of places you’ve seen once somewhere and want to remember in real time but have no idea how to track down. For the first time, the tech is at a point where a shift in directionality can happen, resurfacing all of these locations when and where relevant.
Currently, Sprezi gives its (beta) users a more holistic and personalized view of a location, so they can make a decision using just one platform rather than having to check across multiple platforms for the same amount of data. Additionally, it remembers important factors that help shape your personal decisions, such as what you like, how much you spend, and who your friends are, which can’t be said of other platforms.
Last, and maybe the thing I’m most excited about, we’ve been able to build a tool that can solve compound, proximity-based questions on demand. So users can ask questions like, “What are the best fresh and cheap tacos in the West Village that I can get right now?” and Sprezi will deliver a card stack of recommendations that hit each of your required demands. We currently have data across 127 cities, and are growing every day.
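A compound, proximity-based query like the taco example above can be thought of as a conjunction of hard constraints followed by a ranking step. The schema, field names, and data below are invented for illustration; Sprezi's actual data model and ranking are not public.

```python
# Toy sketch of answering a compound query such as
# "best fresh and cheap tacos in the West Village that I can get right now":
# every constraint must hold, then survivors are ranked into a card stack.
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    cuisine: str
    neighborhood: str
    price_tier: int   # 1 = cheap .. 4 = expensive
    open_now: bool
    rating: float     # blended popular + personalized score

def answer_query(places, cuisine, neighborhood, max_price,
                 require_open=True, top_n=3):
    """Apply every constraint in the query, then rank the survivors."""
    hits = [p for p in places
            if p.cuisine == cuisine
            and p.neighborhood == neighborhood
            and p.price_tier <= max_price
            and (p.open_now or not require_open)]
    return sorted(hits, key=lambda p: p.rating, reverse=True)[:top_n]

places = [
    Place("Taco Uno", "tacos", "West Village", 1, True, 4.6),
    Place("Taco Dos", "tacos", "West Village", 3, True, 4.8),   # too pricey
    Place("Taco Tres", "tacos", "West Village", 1, False, 4.9),  # closed now
]
stack = answer_query(places, "tacos", "West Village", max_price=1)
```

Only "Taco Uno" satisfies all four constraints at once, which is the point of compound queries: each added clause prunes the stack rather than just reordering it.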
Moving forward, Sprezi will be agentic; you’ll be able to say, “Book me a place for dinner later tonight, seven people, Santa Monica, around 7:30 pm,” and it will do it for you, knowing the locations that you already like and what is currently available.
“Brands that have a strategy in place do far less backpedaling and have lower levels of long-term backlash.”
AI has made a lot of advances in recent years, but it’s still known for being inaccurate. How do you ensure that all the information your AI spits out is current and true?
Right, chatbots aren’t close to perfect, and they definitely make stuff up from time to time. Our chatbot will ask if you want it to book you a hotel because we’ve given it very specific directives and tuned it to respond in certain ways. It has no real idea that we haven’t built the booking capability yet and that it can’t follow through on that offer. And, last week, it tried to tell me that Chile was larger than Argentina. So, we have a long way to go.
One way we’re protecting ourselves here is by building our own inference engine. The chatbot responds in certain ways (which are right and accurate the majority of the time), but our proprietary engine surfaces the recommendations, and those come from a custom-built database, so they will never be inaccurate.
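The split described above, a conversational layer that only phrases results while the recommendations themselves come from a controlled database, can be sketched minimally. The table, venue names, and function names here are assumptions made for illustration.

```python
# Sketch of grounded recommendations: the database step is the only source
# of venues, so the conversational layer can vary its wording but can never
# invent a place that doesn't exist in the data.
import sqlite3

def build_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE hotels (name TEXT, city TEXT, rating REAL)")
    conn.executemany("INSERT INTO hotels VALUES (?, ?, ?)",
                     [("Hotel Azul", "Santiago", 4.7),
                      ("Casa Verde", "Santiago", 4.2),
                      ("Pampa Inn", "Buenos Aires", 4.5)])
    return conn

def recommend(conn: sqlite3.Connection, city: str) -> list[str]:
    # Grounded step: only rows that exist in the database can be returned.
    rows = conn.execute(
        "SELECT name FROM hotels WHERE city = ? ORDER BY rating DESC",
        (city,)).fetchall()
    return [r[0] for r in rows]

def phrase(recs: list[str]) -> str:
    # Conversational step: free to reword, never adds new venues.
    return ("You might like: " + ", ".join(recs)) if recs else "No matches found."

conn = build_db()
msg = phrase(recommend(conn, "Santiago"))
```

Because the language layer only ever formats what the query returned, a hallucination can at worst mangle the phrasing, never the list of venues, which is the protection the interview describes.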
I'm looking at it as a traveler, both personally and professionally, having traveled to over 50 countries and managed large productions in many of them. I know how much effort has to go into planning a good, efficient trip, how varied that process is, and, most importantly, how much time gets wasted, a waste that doesn’t have to be an issue anymore. This is a piece AI has helped us solve.
Once we feel we have a good grasp of personalized taste levels across hospitality, we can begin to expand the set of inputs, helping users solve a wider set of taste-based inquiries.
A lot of people are worried that AI will take their jobs, and others fear it will become too powerful. Having worked in this burgeoning field, do you have any fears about AI?
It would be ignorant to say that I don't. Will AI take people's jobs? Most likely, but every technology that's been developed over the course of mankind has taken humans’ jobs. What I believe more is that it will help save people time and energy and tackle the bulk of the middle/busy work that currently sucks our energy, allowing us to focus more on the starting and finishing points which are so important.
And, yes, I believe there are aspects of AI that, if unchecked or unregulated, could potentially lead to greater risk someday. We’re not there yet, but I think the individuals and teams building these systems need to take it very seriously and make transparent decisions with a greater sense of accountability than we’ve seen to date. I trust that many of them already are.
We also need governments to work together to create real and collective systems of accountability and regulation. Outliers will exist, and we may struggle to combat those as quickly as they arise, but if everybody's acting in isolation with a different set of rules and accountability, it will be more difficult to navigate.
You said at the beginning of the interview that your career has been a little all over the place. Do you now consider yourself a technologist?
I think I always was a technologist, but have only realized it in the past decade. Additionally, I feel very fortunate to have found myself working in AI years ago without truly knowing it at the time. It’s funny to say, but in some rooms, I’m the one with the most experience. I am and will always be a builder, mainly of digital products, either for myself or to solve a problem others are facing. It’s in my bones and what I love doing the most: seeing pain points, imagining a new future, designing a customer flow, architecting a solution, mapping out the confluence of inference, finding the right way to deliver outputs, and so on. All of this is infinitely fascinating to me. So, yes, I am a technologist, and I believe I’ll be in a technology role for the rest of my life.
June 4, 2024
© 2024 The Continuum