
Jasmine Wang on AI Copywriting, Replacing vs. Augmenting Human Labor & AI Trust and Safety


"I cared about how amazing the people I was working with were, how impactful the technology we were working on was, and the last characteristic is: can I influence the trajectory of this to be deployed into the world in a safer and more responsible way? So all roads pointed to safe and responsible AI." - Jasmine Wang

Jasmine Wang is the Co-founder & CEO of Copysmith, an AI brainstorming partner for marketers. Previously, Jasmine was heavily involved in AI research at the Partnership on AI, OpenAI, and the Montreal Institute for Learning Algorithms (Mila).

Jasmine started out in engineering and research at Lyft's self-driving division and Microsoft Research's Tech for Emerging Markets group. She has also received engineering, research, and academic fellowships, most notably from Interact, Kleiner Perkins, 8VC, Microsoft, and the Fulbright Foundation.

Jasmine received her Bachelor's in Computer Science and Philosophy from McGill University. In her free time, she plays the piano.

Jasmine Wang: [00:00:33] Hi, Jeremy. Excited to be here.

Jeremy Au: [00:00:36] Well, it's so fascinating. I mean, we're part of On Deck, we met through this great community, and I was just really fascinated by your approach to leveraging AI. I'm excited to share not just your journey but also what you see for the future.

Jasmine Wang: [00:00:52] Thanks so much. We can dive more into this, but we're at the forefront of something really huge and exciting, I'm excited to chat about GPT-3 and everything else that's on the horizon.

Jeremy Au: [00:01:03] Awesome. For those who don't know you yet in your own words, how would you share about your own personal journey?

Jasmine Wang: [00:01:11] I grew up in Edmonton, which is the northernmost major city in Canada. So think extremely cold, think oil and gas, highly conservative, no startups, no technology. I wasn't aware that computer science or software engineering was even a path until I was almost finished high school. I ended up going to McGill in Montreal for university, but I actually started out in comparative literature. I love books, I love writing. I ended up in computer science after attending my first hackathon and spinning up a website for a nonprofit I was working on. I was mind-blown at that moment, switched into computer science, and really fell in love with natural language processing after spending some time in the engineering space.

I worked at a couple of startups. I started out at Breather, a local Montreal-based startup, then went to Lyft, where I was on their self-driving team. I also worked at Square on their Capital team in engineering roles, and this thread from writing carried through; at that point I became really interested in natural language processing. The fact that we could represent words as vectors was incredible. I did research at Mila, which is the largest academic deep learning lab in the world. It's headed up by Yoshua Bengio, one of the three godfathers of deep learning, proudly based in Montreal and Canadian. Then I worked at Microsoft Research as well as at OpenAI, right around when GPT-3 was released.
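The "words as vectors" idea can be illustrated with a toy sketch. The numbers below are made up purely for illustration, not real learned embeddings like word2vec or GloVe produce, but the mechanics are the same: related words end up closer together under cosine similarity.

```python
# Toy word-embedding demo: cosine similarity over hand-picked vectors.
# Real embeddings are learned from large corpora; these values are invented
# only to show why "king" lands nearer "queen" than "apple".
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Semantically related words score higher than unrelated ones.
king_queen = cosine_similarity(embeddings["king"], embeddings["queen"])
king_apple = cosine_similarity(embeddings["king"], embeddings["apple"])
assert king_queen > king_apple
```

This geometric view is what made NLP click for many people: once words are points in a space, "similar meaning" becomes a measurable distance.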

And now I'm working on Copysmith, which is an AI-powered copywriting tool. We use a combination of GPT-3 and other models to help you draft copy. I think of it as your always-on brainstorming partner, but I won't give you the full pitch here. That's how I'd summarize the journey so far. Over time I split a lot of time between Montreal and San Francisco, and I'm currently back in Edmonton, so the circle has sort of been drawn during the pandemic.

Jeremy Au: [00:03:07] That's amazing. And one thing I've noticed is that you've had an interesting journey where you've not only been thinking about AI but also about the nuts and bolts as well as the policy side of it, which is a pretty rare triple hat to be wearing. I'm just curious, what drove that?

Jasmine Wang: [00:03:26] 100%. My fascination with technology, I think because I originally came from the humanities, was instrumental. I love the pizzazz and the versatility and power of technology, but I was ultimately concerned with how it impacts people. And I know that's almost a trite thing to say; nowadays tech and society studies has become really hot, every CS student at Stanford wants a philosophy minor, but I think it's truly so important. How I chose the specialty and direction in which I went came down to a few criteria. I cared about how amazing were the people I was working with, how impactful is the technology that we are working on, and the last characteristic is: can I influence the trajectory of this to be deployed into the world in a more safe and responsible way? So all roads sort of pointed to a safe and responsible AI.

AI is obviously the hot topic of the year if not the decade; lots of talent is flocking to work on AI. One of the reasons for that is that it's making huge impacts on industry that I don't need to dive into, and it also has a lot of implications that aren't fully thought out yet in terms of how we can deploy it in a way that's safe and responsible, at an infrastructural level but also at a consumer-facing level. There are so many different questions here about safe and responsible deployment. I found it intellectually interesting, but it also just satisfied those three things: great people, super impactful technology, and lots of vague or murky questions around how do we make this technology actually useful and beneficial for humanity?

Jeremy Au: [00:05:11] Amazing. You've also shared that you took classes on the ethics of AI and all that stuff. Were there any specific favorite classes or moments that kick-started that journey for you?

Jasmine Wang: [00:05:25] I did take a minor in philosophy, and as part of that some of my favorite classes were in the philosophy of AI and philosophy of science. To maybe pick out a few questions within the philosophy of AI: if you just look at the history of the thinkers in this field, these worries that humans have had about machines have been present for almost all of human history. This question of, "Oh, will AI replace us?" was a question asked during the Industrial Revolution.

Some academics and scholars have termed this the Sisyphean cycle of technology panics. As in, we keep rolling this rock up the hill of, "Oh no, is this next thing going to be the thing?" And worries about it happen in a quite cyclical fashion, matching the technological cycles. This is just a pattern that keeps recurring, which is really interesting. Which is not to say that AI isn't different; I think this revolution actually is different, but it is interesting how every cycle people thought it was different and that this was the one that would displace human labor in a certain way.

Every cycle, people thought a revolution like this had never been seen before, which is in fact true looking forward; retroactively, we can narrativize it and say, "Oh, we are happy that we passed that cycle." But that was a really interesting insight just looking into the history of the philosophy of AI, because philosophers have been thinking around the question of AI, and what it implies for natural intelligence, for as long as the concept has existed.

Jeremy Au: [00:07:06] It sounds like you've been thinking through this a lot more deeply than I have, because the way I consume it is as a big reader of science fiction, where AIs are always villains by nature and, increasingly, actually protagonists. Some of the best recent science fiction, Ancillary Justice, et cetera, actually takes AIs as protagonists in a story and shows how they discover humanity. So that's an interesting trend I'm seeing. I'm curious, from your angle, what's your personal philosophy around AI, now that you've not just studied it and thought about it but are working in it? What's your personal take on it?

Jasmine Wang: [00:07:50] I'm really excited by AI. I think there's one take you could have on AI that I also agree with, which is: if it's inevitable, we might as well collaborate with the AI. Some people have the refrain, "Welcome the robot overlords." I actually have a different view, where I'm really excited about what AI can unlock. As a writer, I've spent a lot of time in creative writing. Here's an example of what you might want from an AI. I can't go to a human and say, "Okay, Jeremy, I have this paragraph I've written out of the scene. Can you give me 15 variants of all the adverbs that I used, and only the adverbs? Just give me 15 different ones for all the adverbs I used in this paragraph."

That's not a good use of your time, and never mind that, it's also not a good use of my time: it takes too much time to specify that instruction. But if it were possible, if it happened naturally over the course of my writing, of course my writing would be better. Of course my creativity would be augmented. I'd be like, "Oh, I didn't think about using that word in that way before." And that's the application I'm really excited about from AI, this creativity-augmenting application, which we'll talk about more when we talk about Copysmith.

And I really just feel there's so much territory in all dimensions. I'm just talking about text here, where we're writing novels or documents. The AI will help us explore much more efficiently and quickly, and explore territory that we might not even have arrived at without it, and I think this is possible for all domains. I'm a writer, so I think about AI in that way, but so many people look at AI with this fear that it will replace them. I think for a lot of disciplines they should actually position themselves as: this AI will augment my work, and how can I integrate it into my workflow proactively such that AI plus human is the default path forward, not AI alone? I'm excited about it.

Jeremy Au: [00:09:48] It's interesting. So you're personally excited about it, and excited about the augmentation component of it, which is really interesting.

Jasmine Wang: [00:09:55] Yeah, exactly. I'm excited about it, but I think there are definitely things to be thought through, and the AI community and industry more broadly need to think about the impacts of these algorithms. Just over this past year, I think conversation about that has increased dramatically. For example, with the release of the documentary The Social Dilemma, they had actors act out these AIs in a way that made it really visceral. We are doing micro-targeting; what does that imply for our news ecosystems, and for truth construed very broadly?

But yeah, I think there's both a lot of excitement and... I'm an optimist, so this might not be entirely well-considered, but I'm also aware that there are huge downsides that we are already seeing to some extent, that we should be careful about mitigating, and that companies need to be thoughtful about and incorporate into their internal and external-facing practices, as should researchers and the broader community. There are so many different stakeholders in the AI community, which makes it really unique. It's not just research breakthroughs, but research breakthroughs that are productizable and have enormous economic value.

Jeremy Au: [00:11:08] I think what's interesting is that on your LinkedIn you have this statement, which is a big one: "By default economists and AI researchers believe AI can automate away all human labor. With my work in research I need to help create a future where AI is beneficial for everyone." So what's interesting is that you imply there is a fork in the road, a fork relative to what everyone believes, but you're also saying that you are going to help steer it in the right direction. I'm just curious, what do you think are the things that would create that fork?

Jasmine Wang: [00:11:42] This is a big question and something that I'm mulling over. I would actually point to another entrepreneur here, slightly more famous than me, Elon Musk, who's working on Neuralink. His express purpose with Neuralink is to keep humans updating at the rate that AI updates. That's one extreme example of human-AI augmentation: literally embed a computer inside your brain so that you can interface with AI very closely. And I think there are things all along that spectrum.

So that's one side of the fork: collaborating with AI, leveraging AI in your workflows, whether that be interacting with it as a piece of software or embedded in your brain; there's a spectrum. But there's another fork where some people either reject AI of their own volition or are unable to access it. And we see this fork where employers have to choose between, say, using AI or using human labor.

And I'm not an economist, this is just me talking to economists. By default, if AI can replace all human labor, which is what AI researchers believe, it will over time be cheaper than human labor. Therefore, the only economic incentive, as I understand it, is that you always use the cheapest option if it's the same quality. So by default, all companies will be economically incentivized to employ AI labor instead of human labor.

The question that I've been really sitting on is: how can we make humans economically viable? The extreme solution is Neuralink, which is to make us this next generation of humans that can interact with AI, but I think there are other tools to be built as well. And companies can be built positioning themselves with respect to these different forks. They can position themselves as "we're going to automate away jobs," or you can position yourself as "no, I want to make humans better at their jobs, reach new heights that they haven't seen before, make them way more productive." I want to position myself on the latter fork. Even if it's not economically possible long-term, I think it's on us to try, and I want to make a serious effort at it.

Jeremy Au: [00:13:54] Yeah. I mean, I think you've got the firepower and the trajectory to do so and to make a difference. And I think what's interesting, for myself, is that I trained as an economist in undergrad. My honours thesis was on how technology adoption diffuses across the world and the speed of adoption.

Jasmine Wang: [00:14:09] Great, you can correct me on everything.

Jeremy Au: [00:14:12] Yeah. And my hope, wearing the MBA hat, is very much: how do we leverage this? And like you said, not necessarily go for the cheapest but increase the profit, right? I think what's interesting, of course, is that AI still seems like it's very much in its early days. It's been a very esoteric domain expertise that people just couldn't access, and over the past five years, as founders, we've seen so many people start building what I'd call bridges between AI and real-world use cases. I mean, you don't need to be an engineer... Well, you had to be an engineer to really understand a Tesla self-driving car, but you can consume it. And now we're starting to see it trickle into B2B and SaaS, of which Copysmith is one example. What do you think is driving that bridge and growth?

Jasmine Wang: [00:14:56] Again, please correct me; I'll pull again on the economic incentives here, because I'm likely getting them horribly wrong. But in terms of converting research, seeing tech transfer from the pipeline of basic research all the way to a product: basic research needs to deliver results that industry is excited about, and then industry will follow.

I think it is only recently that AI has really delivered state-of-the-art results at a research level. We're like, "Oh, image classification just works now," or, "Text generation just works now." Then industry sees those results or those papers, and obviously these aren't binary, there are many, many AI researchers in industry now and many companies have massive AI research labs precisely to leverage these sorts of insights, but they create infrastructure around that.

It's, "Oh, I want to deploy more computer vision models," and therefore there are going to be more APIs and services. I mean, Amazon makes so much money off of AWS, of course they're going to offer a vision API, that's like a quarter of their business, as well as infrastructure to help folks develop more easily. One startup that I really like and admire is Cortex, a YC open-source startup that allows anybody to deploy machine learning models as an API very easily. It was just a YAML file.
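The "deploy a model as an API" pattern that tools like Cortex automate can be sketched with nothing but the Python standard library. This is only a minimal illustration of the idea, not Cortex's actual config or machinery, and the "model" here is a stand-in function.

```python
# Minimal sketch: wrap a stand-in "model" behind an HTTP prediction endpoint.
# Real serving tools add auto-scaling, batching, and GPU management on top
# of exactly this kind of request/response plumbing.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def model_predict(text: str) -> dict:
    """Stand-in for a real ML model: 'classifies' text by its length."""
    return {"label": "long" if len(text) > 20 else "short"}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload, run the "model", return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(model_predict(payload.get("text", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


def serve(port: int = 8080) -> HTTPServer:
    """Build the server; call .serve_forever() on it to run."""
    return HTTPServer(("127.0.0.1", port), PredictHandler)
```

Swap `model_predict` for a real inference call and this is, conceptually, what "deploy your model as an API" means; hosted platforms just generate and scale this layer for you.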

This is a poor answer and quite thin, but I think it's very recent that deep learning as a revolution has happened, and then very recent that we've had the state-of-the-art results that warrant significant industry investment, and follow-on effects from folks who have less and less direct experience with the technology but can build on the infrastructure from those who came before.

It's quite an interesting process to see, coming from a background of studying how science progresses, because science is also very incremental. Seeing this activity around AI for the first time, you also see how that research is extended incrementally in the other direction, because now, for example with GPT-3, you have an API that handles all your auto-scaling for you and has really fast inference time; you don't have to do any of it, you can just treat it as an API. Folks are building no-code apps around it, and we're really seeing how these different pieces can meet, and how infrastructure can be built up in an ecosystem to bring it closer and closer to folks who have less and less expertise, which is amazing, I think. But I don't have a complete theory around that; it's just that the research is there and the economic value is there, so people are coming.

Jeremy Au: [00:17:27] Yeah. It's interesting to see that trickle, and it keeps increasing. As a kid, I used to play MUDs, multi-user dungeons, on Telnet. You play in text, and that world used to be crafted by all of us; everybody would contribute different rooms. And for the past few months I've been playing AI Dungeon, from GPT-2 to GPT-3, and it has been a blast. One thing I think about a little bit is, if people could download the transcripts, they'd be like, "Wow, this guy is really working hard to break the game."

And I think it's interesting to see that consumerization is making it easier and easier for AI to be accessible, especially because AI is also trickling into a lot of the backend, invisible to consumers: better ad targeting, better personalization of feeds, better video curation. And I think it's also interesting from a different angle, which is that we used to joke at the time that AI is just statistics, right? In the past people would discredit an AI company and say, "Oh, you're just doing regressions." And we'd say, "No, we're doing analysis, which I'm happy for you to use."

But I think now we're starting to see companies really use AI because it's so available, as consumers and as APIs. What are you excited about? What trends do you see AI rushing into and transforming more? I mean, one of course is marketing.

Jasmine Wang: [00:18:59] Right. Beyond that domain, I was actually going to mention a specific type of machine learning. Right now we've obviously seen huge advances in NLP and text generation. What I'm really excited about, just from an intellectual perspective, is multimodal generation. Can we generate completely new images and also caption and label them? That potentially unlocks huge data sets for folks who are trying to do, for example, self-driving car safety. So I find that hugely fascinating.

The domains I would highlight here aren't showstoppers or surprising; I don't have a contrarian take. I'm very excited for self-driving cars, I think they will have a huge impact. I think it's farther off than we think, since things keep getting delayed, but I think it is going to happen within the next decade, so I'm extremely excited about that.

And I'm actually quite worried about increased personalization on the web, for a number of reasons, but I think we are going to see it. I think we're absolutely going to move towards a world where it's not just ads that are personalized for you, but entire landing pages and website journeys that are dynamically generated only for you, for a segment of one. There are companies already working on this, but it's rare that the content itself is dynamically generated, which I think is a qualitative difference: you might see a webpage that only you will ever see.

You'll be able to share that URL, but it's only been shown to you, and maybe even the founders themselves would be surprised at the content that's written on that page. I think we're going to see increased aggregation of data on a specific user, following their path through the web in a way that's unprecedented and potentially a little bit scary, but it will lead to higher click-through rates, so there's a balance here.

Jeremy Au: [00:20:48] That's quite similar to what you're tackling, right?

Jasmine Wang: [00:20:51] Yeah. We're not doing entirely personalized landing pages; I can tell you about what we are working on. I'm quite concerned about that use case, and I think it's quite unlikely that we would go down that route. I think of Copysmith very much as an always-on brainstorming partner. Right now, what the product looks like is that you can plug in things that you know about your product, or actually pull them directly from your website, like a simple description of what you do, and with one click you can generate all types of content: ads for all your different channels (Google, Facebook, Instagram, et cetera), product descriptions, SEO meta tags, different landing pages for different audience segments, blog posts, the whole nine yards.

But it's not entirely personalized; we're not going down that route yet. My main goal with Copysmith right now is: how do we get users away from staring at that blank page? And the next step is: how do we get users out of Google Sheets? That's another problem that's not AI-related but just a managerial problem: people are not using a document type that's purpose-made for copy. When they're editing Google Ads or Facebook Ads, it's in a Google spreadsheet most of the time, and they ask their boss for feedback. And if you've ever done comments in Google spreadsheets, you know how horrible it is. You have to find the little yellow triangle in a very tiny cell and hover over it.

I see what we're doing as extremely different from personalization, but we're still focused on marketers. We're focused on unlocking their creativity so that they can get to a first draft much more quickly, A/B test much more quickly, and really have a full feedback loop: okay, I can use this tool to go to my managers, advocate for a certain campaign such as an ad spend, come back and actually evaluate, "Okay, through Copysmith we did this copy-based A/B test. This variant is, statistically speaking, probably the best. Let's sink more money into that particular ad set and really complete the feedback loop."
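The "statistically speaking, probably the best" step of that feedback loop is, in its simplest textbook form, a two-proportion z-test on click-through rates. The numbers below are invented for illustration, and this is the standard statistical test, not a description of Copysmith's actual evaluation pipeline.

```python
# Two-proportion z-test: is variant B's click-through rate (CTR) genuinely
# better than variant A's, or could the gap be chance?
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical campaign: variant A got 200 clicks on 5,000 views (4% CTR),
# variant B got 300 clicks on 5,000 views (6% CTR).
z = two_proportion_z(200, 5000, 300, 5000)

# |z| > 1.96 means the difference is significant at the 5% level
# (two-sided); here |z| is roughly 4.6, so B's lead is very unlikely
# to be noise.
significant = abs(z) > 1.96
```

With a check like this in the loop, "sink more money into that ad set" becomes a decision backed by a significance threshold rather than a hunch.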

And we can build that data set horizontally across industries: this is the sort of ad that we've seen perform well for companies like yours. Agencies do this curation for a huge segment of ad spend because they have some expertise in what does well and what doesn't, but you could scale that up massively with the scale that tech companies have access to.

Jeremy Au: [00:23:10] What are you personally excited about in building that out? Why are you excited to be building a platform?

Jasmine Wang: [00:23:15] I am really excited about AI-assisted writing. You've probably heard a bit of the through line: I'm a writer. I spent a lot of time writing; I'm 80,000 words into my first novel, which I also started during COVID, a very generative period for me, pun intended. I'm really interested in the different ways AI and humans can interact. So the part of the product that I'm personally really fascinated by, and really motivated by, is when I interview users and they're like, "Wow, I didn't think of that before." Or, "Oh, that's really interesting. Let's plug that back into Copysmith and see what it generates."

So I'm really interested in this path of, "Oh, I started here and really have no idea what this campaign is going to look like," and ending up after a few iterations a million miles away from where you started, in territory that you might not have covered, or that would have taken hours and days to cover, without this brainstorming partner. And I'm excited for this sort of tool to be available for everybody, not just folks who can access the world's top agencies for tens of thousands, many, many zeros, of dollars. I'm excited for folks to be able to access this brainstorming power no matter what size they are, because it really is a superpower, and creativity, I think, is a great equalizer. A creative company does much better than a company that isn't compelling, that isn't different.

Jeremy Au: [00:24:37] Yeah. I always tend to get started writing during NaNoWriMo, which is when I do my writing, and writer's block is always a big one when I'm writing manually. So yeah, I'm excited to see AI help take me across the finish line.

Jasmine Wang: [00:24:54] Yeah, totally. Take you across the finish line or get you to the first step which is also really hard. Editing is a lot easier than writing.

Jeremy Au: [00:25:01] Oh, for sure. Definitely. What's interesting is also that as you build this out, you've really tackled marketing as a use case. Why do you think marketers would want something like this, tools like this, AI like this?

Jasmine Wang: [00:25:15] We think this because they tell us this. So many marketers are very technologically literate, very ad literate, and they've been hunting for something like this: "I wish it were possible to draft copy this way." It's a task that marketers have to do day in, day out, especially if they don't have an in-house copywriter. And a lot of companies don't; they either outsource to an agency or have a part-time freelancer. It's only companies that really care about their brand voice and messaging that hire a full-time in-house copywriter.

So we've really seen it in interviews. I don't have a background in marketing. A few of the wonderful folks on our team have agency backgrounds, backgrounds in marketing, so it's really a user pull versus me pushing and building for a marketing archetype or persona that I have in my mind, because, to be honest with you, there isn't one. I haven't done much marketing except for my own Shopify store, which I started last year while I was at OpenAI and which made me start thinking about marketing in the first place. But I'm a horrible marketer; that Shopify store completely failed because I'm horrible at marketing.

So I'm building for an archetype that I don't have in my head but that rather comes to me every day with feature feedback. We have a community Slack of around 200 people by now. They're like, "Jasmine, I need this." Or, "Can we do a user interview on this flow? Because I don't think it makes sense for my agency use case, where I'm managing 30 clients." So it comes straight from folks who do this sort of work every day, and it's very much a pull versus a push product process.

Jeremy Au: [00:26:52] How do you approach those conversations with marketers? I mean, do you go in and just say, "Hey, what do you wish AI could do for you?" Because it's a tool, but at the same time it's a big catchy word as well. I'm just curious how you approach those user interviews.

Jasmine Wang: [00:27:11] Yeah. The initial interviews that I had with folks were more like, "How do you get ideas? What does a brainstorming session look like, and what is the entire conceptualization process, from beginning to end, of a campaign? What do you do?" And it turns out that a lot of people have writing sessions that they do collaboratively, for an hour or more at a time. They block out these chunks where they're just with other folks on the team, like, "Okay, let's just churn out 10 variants of Facebook headlines."

And that was super interesting to me. That is now something that exists in Copysmith, where you can just plug in keywords and instantly get a dozen variants of Facebook headlines, which would otherwise have taken that hour of work. So at the beginning, in user interviews, I was really looking at, okay, what is the entire process? And this was largely because I have no background in marketing, especially at a high-growth or more established company.

So what is this very complex, multi-stakeholder function? Who do you talk to? What does your day look like? Very much a discovery interview. And as the product matured, we're now doing more design interviews, where we're like, "Okay, what surprised you in this product?" Or, "What were you disappointed by? What are the things that made you go, 'Oh, wow, I'm curious about that and I want to do more of that'?"

So there's definitely a clear segmentation: initially I was just trying to figure out, okay, what do my audience and users look like? And then afterwards, after something existed, doing more traditional product interviews, where we were watching people use the product and understanding where they got stuck, any moments of uncertainty or confusion or wishing for more.

Jeremy Au: [00:28:50] What's interesting is that you're not only able to solve one problem, which is writing ads, but also descriptions, metadata, landing pages, blog posts, which is just mind-boggling, because historically, from a human perspective, those are five or six different people, each specialized in one thing. But to you it's the same output, just different variations. Do you think that new approach to handling things could unlock some new... I don't know, job roles, I guess? Like AI shepherd.

Jasmine Wang: [00:29:23] 100%. Here's how I position the Copysmith customer, and actually I'm going to introduce a new verb here. I named Copysmith very intentionally, copysmithing being similar to silversmithing. So you put in this raw part, you get out something that's useful, but it's up to the human ultimately to polish it. Whether there's something about the specific brand guidelines, and you're not able to tell an AI about your 20-page brand guideline that's in a PDF, an impossible format; or you know of a specific promotion that's happening; or, I think most importantly, you are entering territory that your company has not gone into before.

So even if we understand, okay, this is your company's context, because we scrape your website and we know what you've done before, maybe Coca-Cola wants to go a totally different way for 2021, as everybody does. That is something that humans are uniquely good at. I wouldn't call it an AI shepherd; I'm not sure what I would call it. We've been calling them copysmithers, folks who do copysmithing, but I definitely think there will be new roles for folks who learn how to work with AI extremely effectively and really embrace this new paradigm and way of creating, whatever it is that they're doing creatively.

It doesn't have to be just copy: they could be composing a piece of music, they could be trying to architect a new building. There are a lot of creative processes where I think there is a lot of manual labor right now that is quite tedious. It's really, really hard for a human to say the same thing in 10 different ways; we're just not programmed to do that. We get used to using the same patterns because they're efficient for us, just in terms of something like computational space in our heads. We just don't have that much memory. AIs are really good at that. AI is not good at knowing what is true, and it's also not forward-looking.

I think the role of the human in the workplace will very much evolve toward knowing where we are headed and where we want to go, pointing this very powerful engine in that direction, and shaping its output, versus doing the day-to-day, I wouldn't call it drudgery, but the very heavy lifting of racking your brain for different variants. So shepherd might be an apt term. I don't have a better term. Navigator...

Jeremy Au: [00:31:43] Wrangler.

Jasmine Wang: [00:31:44] Yeah, wrangler, yeah.

Jeremy Au: [00:31:47] Yeah, what's interesting is that we definitely see a ton of these AIs, and you touched upon something really interesting, which is that AI is really looking at the past, we're trying to wrangle it, and it doesn't have a good sense of what we're trying to aim for.

Jasmine Wang: [00:32:02] Yeah. I sometimes say this to friends who don't understand AI or feel threatened by AI: acknowledge that AI is the biggest thief. It has stolen all of humanity's work. It has been trained on all of the data that humanity has ever produced. And when I say data, that's a blunt term, but think Bibles, think all the holy texts. Think everything important that anyone has ever written online, everybody's Tumblr blogs. Everything that is important and vulnerable, people's best works of their lives, AI has all of it.

And we should not, therefore, in my opinion, feel ashamed or taken aback when AI can write at the level of a normal human. I'm almost like, "Of course." You have been trained on all the data, and I'm using "you" in a personified way, but AI has been trained on all of this data that is so important to the history of humanity. Of course it can write in a way that looks like a normal tweet or something, but it's also fundamentally limited in that way. It is purely looking to the past. It's hard to keep models up to date.

GPT-3, for example, only has data up until 2019. It knows nothing about COVID. While I'm not an expert, I'm not a high-volume marketer, I imagine that messaging has changed hugely in terms of how ads perform and what messages resonate with people over the course of 2020. AI cannot keep up with the day-to-day changes, nor can it predict where our sense of taste is going, what we value as a species, what we care about and want to think about going forward, and that's where humans come in. I think there will always be a role for humans for that reason, not just for a curatorial purpose but for values-defining and trajectory-setting in a much deeper, direction-setting sense.

Jeremy Au: [00:34:07] Are there approaches to making it more forward-looking? Do we have a checkbox where we say, "Hey, AI, I know I've really liked doughnuts over the past 20 years of my life, but for the next five years I would like to lose 20 pounds, so could you steer me in this direction and send me more stuff around nutrition and better fitness?"

Jasmine Wang: [00:34:29] I mean, that's super interesting. I would say yes, just because I can imagine algorithmic systems doing that, and I'm not an expert on recommendation systems or algorithms, but there have been a lot of complaints from the humane technology community that technology responds to my wants but not my needs or my long-term vision of who and what kind of person I want to be. I imagine this is technologically possible, but whether it's economically viable or desirable is again a question of economics, which I'm not very familiar with.

I'm not sure it's economically incentivized for Facebook to help you, Jeremy, lose weight unless there's something they can sell you around that. So I totally buy that it's technologically possible, and you probably don't even need AI, because that's a very explicit set of preferences you'd be giving a system. It's like, "Hey, Facebook, you've been showing me a lot of ads for donuts. I don't want donuts anymore. Please show me ads for salads and weight-loss plans." I'm sure Facebook, if they so desired, would be able to do that technologically. And the question again comes down to economics.

Jeremy Au: [00:35:38] Well, I mean, I think the economics is there. People spend billions of dollars on gyms, a lot of which go unused. They spend billions of dollars on salads and all kinds of high-level aspirational stuff. So, yeah, definitely there's a lot there, and of course it may not necessarily be business-generated but more user-generated, because Dunkin' Donuts' job is to sell donuts from their perspective, whereas the consumer knows best about their own future direction, right?

Jeremy Au: [00:36:03] ... and how do we transcend that? Have you ever thought about AI as coaches, I mean, coaching services?

Jasmine Wang: [00:36:09] Yeah. I've seen a few interesting forays into this space, and I've actually had a lot of coaches: a writing coach, a productivity coach, a life coach, the whole nine yards. A lot of the value I derive from having a coach is the sense that someone is listening, not the exact content of what they tell me. So I'm very curious. I haven't explored a lot of these AI coaching solutions, but one question, if I were approaching them from the perspective of a user or a potential angel investor, is: what sort of value do you think people typically derive from coaching, and what do you think people would want to get from your service?

In terms of content, my background in NLP is actually not in general NLP but in dialogue models, so I've thought about the coaching use case before. You could probably personalize it to some extent, but you wouldn't be able to personalize it down to a user segment of one, just because of the costs if you wanted to fine-tune the model. And maybe to back up very quickly: natural language processing models only have a certain window in which they can retain information. You can think of them as extremely forgetful human beings.

So you can only have a very short conversation with a model where it remembers everything that you discussed. You really only have two options. You could fine-tune a model on a user, so it just knows all the information about them and you no longer have this context-window problem, but that's one model per user, which would be far too expensive. We don't have methods for doing this yet, well, depending on the pricing models of these AI coaches. Or you just have, frankly, a crappy coach who forgets what you told them yesterday.
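The context-window limitation described above can be sketched in a few lines. This is a toy illustration, not any real model's API: it assumes a hypothetical window of whitespace-split words standing in for tokens (real models use subword tokens, and window sizes vary by model).

```python
def truncate_to_window(history, max_tokens=8):
    """Toy context window: keep only the most recent tokens that fit.

    `history` is a list of utterances; whitespace-split words stand in
    for the subword tokens a real model would use.
    """
    tokens = " ".join(history).split()
    return tokens[-max_tokens:]  # everything earlier is forgotten


conversation = [
    "my name is Jeremy",
    "I want to lose 20 pounds",
    "recommend fitness content please",
]
# With an 8-token window, "my name is Jeremy" has already fallen out:
# the model no longer "sees" the user's name on the next turn.
print(truncate_to_window(conversation))
```

This is the trade-off in miniature: without per-user fine-tuning, anything that falls outside the window is simply gone, which is why such a coach forgets yesterday's session.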

 

So right now I don't think it's super viable, but I'd be very happy to be proven wrong, and I think the world would benefit from having coaching more widely available. It's one of the things that has unlocked a ton of value in my life, and I wish more people were able to access it. I'm excited about it, but I definitely think there are still challenges, both from a product perspective, I care because a coach is listening to me, that's the value derived from coaches, and also some technical and business-margin challenges that go with the product.

 

Jeremy Au: [00:38:26] That's so true. And I think it's interesting that you framed it up as not just the output, telling you to do A or B, but also the listening side, that accountability, companionship side of it. And it's interesting, we definitely see a lot of different approaches right now, with Replika and others that are really working the angle of not necessarily accountability but definitely companionship.

 

This reminds me of Tamagotchis and Neopets back in the day, right? Where you stick a face on it. Every AI is childlike right now, so you can forgive it for being forgetful. So there's a huge empathy component. And one thing I always remember from my friends who deal with robots is that there are examples of great empathy with AI and robots, and there's also a lot of violence against AI and robots. There's a weird bifurcation or dispersion in that human reaction. What do you think about that?

 

Jasmine Wang: [00:39:23] This is a super interesting question, and, not having read the studies on the topic, I'd be curious if there's overlap between those two audiences or populations. Is there a bifurcation where one population treats robots and anthropomorphized agents really well and another treats them horribly? It's interesting. I'll try to offer two unintuitive thoughts. One, I think making it easier to empathize with AI has its dangers or downsides; it's not necessarily a good thing, because it might prompt us to trust the AI more than is warranted.

 

For example, there was a study, I forget the author now, where a robot was let into a Harvard dorm simply because it requested access. It was like, "Can you let me into the dorm? I'm making a delivery." And something like 70% plus of the students let it in. And this is a highly guarded campus with carded access on the doors. It was meant to simulate very violent scenarios like a bomb threat: a robot could have gone in and easily violated the security of that building. And it was a smiling robot.

 

So in scenarios like that, we really need to make sure that trust is well calibrated, that trust is warranted by a system before we trust it, and, to put it more literally, I don't think a cute face is an accurate signal for that sort of trust. We need to trust that the system is reliable. There have been systems set up for certifying the safety of certain systems in other industries, and AI might, and in my opinion should, move toward that sort of model. Those sorts of signals are real signals for safety. We would not get into an airplane just because it has a smiley face on it, but we might trust a robot because it has a smiley face on it, and that worries me. So that's one unintuitive thought: maybe we shouldn't be empathic with robots.

 

And then, for people who are really violent with robots, there was a study, I'm not sure who posited this, where one of the concerns around people treating anthropomorphized robots violently was that it might cause them to lower their standards for how they treat other humans. Robots, if they're made to look like humans, can feel like moral patients, and if we violate that by beating them or treating them terribly, the stance was that we might therefore treat humans in our lives really poorly, or lower our standards generally for how we treat other sentient beings. If we know that this robot is not sentient but it looks somewhat sentient, it might affect and color our perceptions of other sentient beings.

 

Jeremy Au: [00:42:01] That's really interesting, because there's so much difficulty there, and the good insight I take away is that maybe there isn't a difference between those who care about robots and those who do violence to them, and maybe that empathy is driving their violence. That's actually a really good insight, by the way. I will make sure to link to all those philosophers and papers in the website transcripts.

 

Jeremy Au: [00:42:26] Awesome. Well, thank you so much, Jasmine. I really appreciate you taking the time to share your journey.

 

Jasmine Wang: [00:42:30] Thanks so much, Jeremy. This was a fun conversation where we talked far too much about economics, a subject I know nothing about, but yeah, very fun. Thank you so much for having me on.

 

Jeremy Au: [00:42:41] Well, I look forward to doubling down and revisiting this topic and see how our predictions and thoughts have evolved.

 

Jasmine Wang: [00:42:49] We'll likely be hugely wrong, but it's good that we make predictions; we become better at forecasting. Thanks so much, Jeremy.
