80. How AI is Impacting Marketing Research
00;00;06;23 - 00;00;32;09
Meagan
Hi. Welcome to Dig In, the podcast brought to you by Dig Insights. Every week we interview founders, marketers, and researchers from innovative brands to learn how they're approaching their role and their category in a clever way. Very excited to have Mike, Joel, Kathy and Julien here with me today for what has proven to be a very popular topic.
00;00;32;11 - 00;00;50;03
Meagan
I think this is the highest registration volume we've had for anything we've launched at Dig, so I'm really excited to have this conversation. And it seems like other people are as well. But I wanted to bring these people together to specifically get down to brass tacks when it comes to: what does it mean for the market research industry?
00;00;50;04 - 00;01;15;10
Meagan
What does it mean for researchers specifically? And as you can see, these people are incredibly knowledgeable. They're founders of research-based businesses. Mike is the founder of Insight Platforms, so he gets a chance to see what's going on across the market research landscape. So I wanted us to start with that. Mike, talk to us about how you're seeing AI applied across the market research space.
00;01;15;17 - 00;01;40;14
Mike
There's been a huge explosion of interest in AI, but, you know, we've actually been using it in research for quite a few years. So in a lot of ways it's kind of mature; the whole ChatGPT thing has focused people's attention. But there have been apps built specifically as research tools for, you know, text analytics, image analytics, predictive models.
00;01;40;16 - 00;02;23;00
Mike
I think the distinction here is the new wave of generative AI tools. It's actually about taking an input prompt and creating something new. And there's some really interesting stuff being done with these new tools built on large language models, really around, you know, being able to summarize qualitative data. That was really hard before. So you feed in transcripts of groups or interviews and you get a coherent, decent summary of the output. You know, conversational surveys, and sort of large-scale discussions that are kind of partially moderated by bots, that's happening. You know, drafting content.
00;02;23;00 - 00;02;33;16
Mike
So being able to create, you know, reports and things like that. There's a lot that's happening, with huge amounts of innovation coming over the next few months, I think that's for sure.
00;02;33;19 - 00;02;36;13
Meagan
How are we sort of leveraging AI in that space?
00;02;36;15 - 00;02;55;23
Julien
We actually did an innovation concept test recently with one of our clients' ideas, and they had spent a week generating ideas, going in a lot of directions. We took their ideas and then we also generated a few ideas. And the generated ideas actually did win in our, in our...
00;02;56;00 - 00;02;59;01
Meagan
Okay, so they did win. Okay.
00;02;59;16 - 00;03;28;06
Julien
That was pretty exciting stuff. And the way we've come about it on the qual side is what's kind of immediately applicable today. We don't necessarily need to think about what's coming five years down the road; it's the simple fixes or the simple augmentations that we can apply right now. So this week I was talking to one of the analysts on the team and I said, oh, have you used ChatGPT this week?
00;03;28;09 - 00;03;54;19
Julien
And she said, well, I was writing an email and there were a bunch of components I needed to summarize, things I needed to say, but I couldn't find the way to assemble my ideas in the way I needed to. I threw it into ChatGPT and it came out beautifully. I had a few tweaks to do, but it turned what would have taken me 30 minutes of assembling and reassembling into a three-minute task.
00;03;54;21 - 00;04;07;18
Julien
And so we are working with Joel's team and using it on analysis, using it to augment, using it to ideate, kind of like how your team's using it. But where I see the benefit is in those little moments that our team is using it for.
00;04;07;20 - 00;04;25;11
Meagan
One of the things I loved, Kathy, about talking to you is that your team is hyper focused on the chatbot and how that can be leveraged. And what I'm hearing you say, Julien, is that you want to focus on the here and now, and, you know, what we can actually fix in the moment or what we can augment in the moment.
00;04;25;14 - 00;04;32;25
Meagan
But Kathy, I wanted to hear from you. Talk to me about why your team has focused so heavily on this sort of application of AI.
00;04;33;01 - 00;05;03;05
Kathy
Our team has been very focused on conversational AI. Originally, the idea was we really wanted to bring that engagement to surveys. We wanted participants to want to participate in surveys. We tried many, many different things, and finally we decided that it's the conversation that's the real thing. Other things some clients might feel are kind of gimmicky, but real conversations are the thing that people actually really, really enjoy. I think conversational AI probably is quite self-explanatory.
00;05;03;05 - 00;05;34;19
Kathy
It's conversation. But in the context of market research, I think it actually plays a quite different role. Basically, conversational generative AI right now is able to do more of the creative tasks, much more than before. And in the context of market research, like Julien, what you said about qualitative research, the question-asking aspect is actually surprisingly creative: asking the right question in the right moment.
00;05;34;19 - 00;05;49;16
Kathy
That requires a lot. So yeah, that has been what we've been hyper focused on. And the other part of it is, now that we have a lot of conversational data, how we're going to make good use of it, how to analyze it in a way that is efficient.
00;05;49;17 - 00;06;13;17
Meagan
I heard someone recently on a podcast say it doesn't have to get you 100% of the way. At least from a marketing perspective, if it can get you 90% of the way or 80% of the way. I know for us we're using it for SEO articles. We're working with a company called Jasper that has an awesome product that essentially helps us really quickly create content.
00;06;13;19 - 00;06;37;28
Meagan
But the content is not finished. It's just coming up with those paragraph blocks. So I think, yeah, across marketing and research, it's great to lean on it from that perspective to get you, say, 80, 90% of the way there. I did want to touch on something you said, that it's shocking how challenging it is to develop questions or ask questions in the right way.
00;06;38;01 - 00;06;58;18
Meagan
So do you feel like with the work you've done on generative AI for the chatbot experience that we've gotten to a place now where the bot can ask questions in the right way? Like, where are we at, I guess, as someone who's kind of naive to AI, where are we in that space? Do you feel like you could use a chatbot to ask the right questions in the right context?
00;06;58;25 - 00;07;29;23
Kathy
I think yes, we can. We can actually ask really good questions. So our product has two tiers: one is the standard tier, the other is the premium tier. For the standard tier, we're seeing 90% really good questions. The premium tier is basically for when the objective is very specific, when the topic is very specific. For example, if we're asking doctors about a new device, obviously the AI wouldn't understand it; it's a completely new device, a new term.
00;07;29;29 - 00;07;58;09
Kathy
So it's not possible for the AI to ask the right questions on its own. But the premium tier allows the AI to learn on the spot, right before the study goes into field, by giving it that information so the AI is prepared for the kinds of things people will be talking about. So it's a really exciting time, because I wouldn't have said the same thing half a year ago, but with the boost of generative AI, we're really seeing good results.
00;07;58;16 - 00;08;22;23
Joel
There are so many open source models and other models that you can adopt, and so many tools out there. Like you just said, you can fine tune them for your specific application and then give them that knowledge of some, you know, hyper-specific use case scenario that hasn't been developed or might be a new innovation, because obviously this is market research.
00;08;22;23 - 00;08;39;21
Joel
A lot of what we're doing is testing new products and new product ideas, and we have to explain them to our respondents before we ask them questions about it. And if we want the AI to be doing follow-up questions or to help us with the report, it helps to give it context and, you know, explain to it what the tool is like.
00;08;39;23 - 00;08;59;22
Joel
And then if you want to build specific tools around, you know, getting it to ask follow-up questions in that way, then, like you said, you can actually do that, whether that's fine-tune training on the fly or other, you know, context prompting and everything. There's lots of different tools out there.
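A minimal sketch of the context-prompting idea Joel describes, assuming the OpenAI Python client; the product description, respondent answer, and model name are hypothetical placeholders, and an open source model served the same way would work equally well.

```python
# Hypothetical sketch: give the model product context up front so its
# follow-up probes stay on topic. Assumes the `openai` package (v1+) and an
# OPENAI_API_KEY in the environment; all text below is made up for illustration.
from openai import OpenAI

client = OpenAI()

product_context = (
    "Concept being tested: a plant-based snack bar aimed at busy parents, "
    "sold in the chilled aisle at a premium price."
)
respondent_answer = "I like it, but I'm not sure I'd find it in my store."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are a survey assistant. Using the product context below, "
                "write one short, neutral follow-up question.\n\n" + product_context
            ),
        },
        {"role": "user", "content": respondent_answer},
    ],
)

print(response.choices[0].message.content)
```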
00;08;59;22 - 00;09;26;20
Meagan
I do want to touch on this, because I want people to walk away from this conversation with a sense of where they need to go next in terms of, where should I explore, how useful is AI to apply? And I feel like we've gotten there from the positive side of things. You know, definitely consider it from a brainstorming perspective, definitely consider it in a couple of other scenarios. But where do we really think that it's just not mature enough yet?
00;09;26;21 - 00;09;37;12
Meagan
Like, are there areas where you would say, you know what, it's not even really worth exploring how you could apply it for this specific research need, for instance?
00;09;37;16 - 00;09;55;19
Joel
I have a good example for that, because, like we've already talked about, we do idea generation, and we've done that. A lot of the ideas that we test for clients are written ideas, because, you know, we're generally doing idea screening very early in the innovation process, before they have concepts built up and paid-for agency artwork and all that.
00;09;55;22 - 00;10;31;26
Joel
And so a lot of them are written ideas, and AI does great there. We've been talking a lot about how we can use AI for text, but images, in fact image AI is actually really good; there are astonishing amounts of cool stuff you can do. And one thing I'm doing, kind of as a pro-bono thing on the side, is using object detection AI to detect landmines from drones, which is pretty cool.
00;10;31;28 - 00;10;59;02
Joel
So aside from that, there's really mature AI with images. But in terms of image generation, a lot of people have probably heard of Stable Diffusion and Midjourney and OpenAI's DALL-E, and those are really mature, and people are creating incredible artwork, but they're mostly trained on kind of artistic stuff and not product-based imagery.
00;10;59;04 - 00;11;33;18
Joel
And so I've tried, you know, proof-of-concept work to see how well we can generate an image. Like, can we take our client's chip bag or pop can or something like that, give it that can, and then say, update the tagline below? And the AI for image generation is not there yet at all, in any way, shape or form, for adding, for photoshopping text onto an image. It can create an amazing landscape, or an astronaut on a horse on the moon, you know, it can
00;11;33;18 - 00;12;03;01
Joel
do incredible artwork of that sort, but it can't take your can of, you know, cola and then put a different tagline on the bottom for it, because it just doesn't understand the English language the way we would; there's sort of a disconnect there. Or even to create that can in the first place: if you give it a specific brand and say, create a new brand for this, it doesn't do a great job with that.
00;12;03;03 - 00;12;22;12
Joel
And it's because of the training data; you know, it's not trained on that type of data. But as we get more and more mature in that area as a field, and when I say we, I mean the AI community, not necessarily the market research community, I'm talking more broadly, like tech. As we get more and more mature with that, you know, that'll definitely be there.
00;12;22;12 - 00;12;35;01
Joel
Like that's probably coming in the next year or so, but it's not there now. And so image generation, idea generation with images, is not there. It's not mature enough as an area.
00;12;35;05 - 00;12;55;23
Mike
For me it's not so much about what things it's not capable of right now. It's about bringing a healthy dose of just being careful and kicking the tires around everything that you're working with, because, you know, like Joel says, what's the training data? What are the sources? There are a few questions coming in here about bias.
00;12;55;23 - 00;13;30;19
Mike
You know, depending on how models have been trained and built and where the sources come from, it is getting better at actually referencing some of the input sources. But you still need to be very, very careful about taking factual output at face value. So there are all sorts of things that you need to be quite careful around. And I think we have this weird expectations relationship with technology where we see something that is cool and then we instantly expect it to be way cooler than it is.
00;13;30;25 - 00;13;44;13
Mike
So it totally can't do everything for us. You still need to see these things as fairly basic building blocks of your own workflow. They're not substitutes or, you know, magic wands.
00;13;44;19 - 00;14;03;11
Julien
One thing that keeps coming up over and over again is this need for critical thinking. And it's really easy to get tricked, or to trick yourself, if you're using it to, say, validate an idea. Well, you know, the chatbot told me that this is valid. And, well, what was your prompt?
00;14;03;18 - 00;14;32;01
Julien
What did you give it to get back what you got? And there just seem to be so many opportunities to go completely off the rails. I'm sure a lot of the people listening have read that New York Times article where the chatbot got into this whole romantic escapade and told the reporter to separate from his partner.
00;14;32;03 - 00;14;58;26
Julien
But anyway, there are ways that it can derail, that it can derail even the basic steps. Like, you can ask it to, you know, turn this into active voice, and then it just doesn't, or it does something slightly different. So I think, when it comes to what it can do, it's more about what level of literacy is required to use it effectively and to use it in a way that's not going to mislead you, especially as a market researcher.
00;14;58;26 - 00;15;19;22
Julien
As a qualitative market researcher, I don't want to put something incorrect in my report. And how much is my team, or how much is any team, relying on what is being generated? And so, to kind of go back, there needs to be a lot of literacy, a lot of training, a lot of critical thinking that goes into interpreting whatever gets generated.
00;15;19;24 - 00;15;23;24
Julien
Because otherwise, I think, you're really going down that rabbit hole.
00;15;23;26 - 00;15;49;11
Meagan
And in that sense, because I think there's a concern, I don't know if it's within the research community, but just in general, there's this doom and gloom of AI is going to replace human jobs, like, immediately. And I'm being hyperbolic, but I think the point you're making is a really good one: that it's not a replacement for a human who's a researcher.
00;15;49;11 - 00;16;10;10
Meagan
It's a replacement for, you know, potentially a tool you were using previously that you no longer need, because this gets you a little bit closer to where you want to go. And I think as a tool, when you're thinking about it that way, it becomes hugely valuable. But it can't do the piece that you're talking about. It can't do that like critical analysis or critical thinking piece.
00;16;10;10 - 00;16;14;02
Meagan
I don't know if it's going to be able to do that, but it can't do it right now.
00;16;14;02 - 00;16;37;15
Julien
I'm thinking about it in the here and now. You know, Eleni on my team was saying to me yesterday, so is it just a really good thesaurus? She was trying to come up with names for a product we're developing, and eventually she threw it in and it just generated them; she didn't have to use her usual tool. Like, is it just that? And that's where it may be, here and now.
00;16;37;17 - 00;16;41;20
Julien
But I will also turn it back to the experts to counter that.
00;16;41;22 - 00;17;06;00
Kathy
I would agree with that. I think AI, at least as of today, at best is a tool. And just like any tool, at the end it's not the capabilities of the tool itself, it's how you use the tool, how it's built into the experience, that really makes a difference. So I think the literacy, all of that, is basically part of the user experience. As a product company,
00;17;06;00 - 00;17;40;04
Kathy
I think nothing else is more important than the user experience, so that the experience can make the best use of the tool or the capabilities behind it. For example, take the same conversational AI capability, or the probing capability: in a quantitative setting there's no moderator, so it's just an enhancement of the existing quantitative environment, it's automated probing. While for a qualitative use case, there is a moderator. I am a moderator; I don't necessarily trust that the bot can ask better questions than I do, but honestly, these bots can ask good questions.
00;17;40;11 - 00;17;53;23
Kathy
But still, I think in that use case, the user experience should be that we give moderators some options and suggestions so that it makes their life a little easier. So it's really the user experience at the end, more than the tool itself.
00;17;53;26 - 00;18;32;24
Meagan
I think what you guys have all touched on in the last few minutes is this idea of user adoption, or at a high level, making it easy for people to use. At the beginning we were talking about how great it is as a concept. And I think what I wanted to turn to now is how to make it easy to adopt some of these use cases for AI. Like, I guess, Kathy, because you've got the conversational chatbot in mind, how have you found user adoption of that?
00;18;32;24 - 00;18;40;24
Meagan
Like, is it a quick learning curve? Is it quite challenging? Yeah, talk to me about what that looked like for you guys.
00;18;41;01 - 00;19;15;09
Kathy
It goes beyond the tool itself. With the conversational capabilities, basically surveys can become a hybrid. I think the adoption question is more about to what extent the hybrid approach can be easily integrated into the research process. That itself, for some organizations, can be a challenge. I think behind the challenge is the people. I mean, in our industry, traditionally we have been trained either as a qualitative researcher or a quantitative researcher.
00;19;15;12 - 00;19;50;00
Kathy
So at this point we don't really have a lot of researchers who love doing both, let's put it that way. There are researchers who really just want to look at numbers, and other researchers who will always be on the qual side. So people are kind of trained differently. So for us, I think the technology is quite amazing, but on the work process side, it creates new opportunities, and those new opportunities need time for people to absorb into their workflow.
00;19;50;02 - 00;20;24;18
Mike
Yeah, I was going to say, for the type of applications Kathy's talking about, there's kind of a new language as well. You know, it's a new paradigm: if you've got a conversational survey, it's also a massive open discussion group. So is it something that's qualitative and big, or is it a survey that's much more flexible? People struggle to put these things in a box, and even though they open up new opportunities, there's a lot of inertia and, you know, passive resistance to stuff that is different and new and that you have to work with in a different way.
00;20;24;18 - 00;20;26;05
Mike
So I think.
00;20;26;07 - 00;20;39;24
Mike
There are things that are very obvious and easy to use and work with. You know, a lot of people, I think, are playing around at the moment with some of these applications rather than necessarily embedding them in new ways of working.
00;20;39;24 - 00;20;52;26
Meagan
So Julien and Kathy, Mike, any sort of tips that you have for people? I know that's quite tactical, but when it comes to actually leveraging some of these tools and making it easier, any sort of tips that you guys have?
00;20;52;28 - 00;21;16;09
Joel
The point is, if you can guide it down the path of what you're looking for. My general theme is, if you're giving it examples in the form of the flavors that you're looking for, sort of combinations of flavors, or really, you know, aimed at millennials or whatever, then it starts to follow that trend and it can kind of generate ideas or answer your prompts along those lines.
00;21;16;09 - 00;21;18;02
Joel
Is that what you're looking for it to do?
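A small illustration of the tip Joel is describing, few-shot prompting: the example flavors and target audience below are made up, but the pattern of showing the model what you want before asking for more is the point.

```python
# Hypothetical few-shot prompt: seed the model with examples of the style and
# audience you want, then ask it to continue the pattern.
few_shot_prompt = """You generate new snack flavor ideas aimed at millennials.

Examples of the style we want:
- Yuzu and sea salt dark chocolate
- Hot honey and pretzel crunch

Generate five more ideas in the same style, one per line."""

# This string can be sent to any chat model, for example via the
# client.chat.completions.create(...) call sketched earlier.
```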
00;21;18;05 - 00;21;39;00
Meagan
Yeah. So specificity is a big part, and I think that was not exactly what you just said, but that's what I took away at a high level: this idea of being quite specific and making sure that you're really telling it, with the pattern example in mind, exactly what it is
00;21;39;02 - 00;21;40;15
Meagan
you're looking for.
00;21;40;17 - 00;22;03;06
Mike
Tips, yeah. You were asking about tips. Just on Joel's point about specificity, I think I've found it useful to iterate, you know, a few times. So you keep kind of going a little bit deeper, or back and forth: yeah, that's not quite right. I work with Notion, which is just a fantastic tool for documenting everything we do in our business.
00;22;03;06 - 00;22;20;06
Mike
And it's got the GPT-3.5 model embedded in it, and it's actually got, you know, "try harder" or "rephrase this" or "keep going" options, and you keep trying and refining, and you'll often get to a much better place on the second or third attempt than you did on the first one.
00;22;20;07 - 00;22;23;21
Mike
So yeah, don't expect the output to be perfect first time.
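A sketch of Mike's iterate-and-refine tip, again assuming the OpenAI Python client; the prompts, refinement instruction, and model name are all placeholders.

```python
# Hypothetical sketch: keep the conversation history and send a refinement
# instruction instead of accepting the first draft.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # placeholder

messages = [
    {"role": "user", "content": "Draft a two-sentence summary of why shoppers liked concept A."}
]
first = client.chat.completions.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Second pass: ask for a sharper rewrite rather than starting over.
messages.append({
    "role": "user",
    "content": "Too vague. Rephrase it, shorter and more specific about price and packaging.",
})
second = client.chat.completions.create(model=model, messages=messages)
print(second.choices[0].message.content)
```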
00;22;23;23 - 00;22;44;16
Kathy
Actually, my personal tip is, even though there is this wonderful assistant, we have to continue to improve ourselves, to learn more, to be more knowledgeable about what we do. I do believe in that. I think once we know what we're looking for, there's a better chance that AI can do better for us. So let me use a very personal example.
00;22;44;22 - 00;23;08;06
Kathy
English is not my first language. There are times when I know what I want to say, but I don't know exactly how to say it, and it's not a single word, so it's not like I can use a dictionary and find a synonym or something. So even before ChatGPT, I used a tool; there are a lot of tools like paraphrasing tools, very smart tools.
00;23;08;08 - 00;23;25;19
Kathy
You just say something vaguely, this is kind of what I want to say, and then you use the tool, and the tool will give you something else, give you ideas. So that's an example: if you at least know vaguely what you need to say, what you want to say, then AI can help you do the next step and give you some options.
00;23;25;22 - 00;23;31;01
Kathy
Yeah. So to me the best way to use it is to improve ourselves first.
00;23;31;03 - 00;23;57;20
Julien
It can generate a lot of stuff, and it can augment, and I'd recommend playing with it to see how it augments, what it does to your initial idea, to your initial prompt. But ultimately it boils down to how you interpret it, what you do with what it gives you. And so I would just highly recommend a heavy dose of critical thinking, which is kind of what my team and I talk about.
00;23;57;20 - 00;24;12;02
Julien
And just not taking that initial feedback as, you know, truth. Yeah, just critical thinking. It's definitely a good defense.
00;24;12;09 - 00;24;15;17
Meagan
Has there been progress made with unconscious bias and AI?
00;24;15;19 - 00;24;39;23
Kathy
I think the question of whether we have seen progress in AI is really the question of whether we have seen progress in our societies. But I think the answer is yes, we have seen progress. And, you know, when we look at data, when we look at how AI clusters things, how it puts a sentiment on certain comments, people's bias is in there.
00;24;39;23 - 00;25;02;12
Kathy
It's just so obvious. There are things that people would just see as matter-of-fact, but because of the underlying bias, the AI decides it's negative or it's positive. It's everywhere. But I do think, at the end of the day, AI is trained by us, by our behavior. Just like in the good old days, people would say if the kids are not behaving well, it's because of the parents.
00;25;02;12 - 00;25;06;20
Kathy
They're just learning from their parents. So I think it's the exact same situation.
00;25;06;20 - 00;25;29;02
Mike
The nature of bias and how we define it is very interesting, and it's not all slanted one way. And there is going to be an explosion of perspectives on this. Whether it's a good thing or not, there will be different models that are trained to be more right wing, or less tolerant of certain views and perspectives, particularly in different, you know, political and regulatory environments.
00;25;29;02 - 00;25;32;00
Mike
So it's not a simple question at all. But yeah.
00;25;32;03 - 00;25;45;07
Meagan
She's asking, what are the most impressive tools that researchers can license to get the texture of qual with the scale of, I'm assuming she means, quant? Any tools that you guys can think of?
00;25;45;09 - 00;26;07;10
Mike
Well, I mean, I can think of one excellent one that's represented by one of our panelists. You really should check that out if that's the range she's talking about. There are a few different tools like that; there's also GroupSolver, which has dived into the chatbot as a technique as well.
00;26;07;10 - 00;26;45;02
Mike
So there are a few others in this space, but they really should be taken a look at. I mean, I think one of the things that AI is doing more broadly is being able to bring a quantifying lens to unstructured data in the broader sense, whether that's lots of free-text comments from surveys, or open posts on the internet, or reviews, or even images or videos. Being able to codify that, to bring a layer of structured analysis to things that were previously, you know, unstructured.
00;26;45;05 - 00;26;56;01
Mike
And I think there are far more tools in that sense. But if you're looking for something that's analogous to, or, you know, better than Remesh, then look no further than...
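One way to picture the "quantifying lens on unstructured data" Mike mentions is coding open-ended comments into a fixed set of themes so they can be counted. This is a hedged sketch with made-up comments, themes, and model name, not any particular vendor's method.

```python
# Hypothetical sketch: ask a model to code free-text survey comments into one
# of a fixed set of themes, then tally the results.
from collections import Counter
from openai import OpenAI

client = OpenAI()

comments = [
    "Way too expensive for what you get.",
    "Love the packaging, very easy to open.",
    "Couldn't find it anywhere near me.",
]
themes = ["price", "packaging", "availability", "taste", "other"]

coded = []
for comment in comments:
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": (
                f"Classify this survey comment into exactly one of {themes}. "
                f"Reply with the theme word only.\n\nComment: {comment}"
            ),
        }],
    )
    coded.append(result.choices[0].message.content.strip().lower())

print(Counter(coded))  # structured counts from unstructured text
```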
00;26;56;04 - 00;27;09;09
Julien
One of the tools we've been using is Canvs AI, which does sentiment analysis of kind of larger-scale qual. So that's definitely a great tool that we've been able to leverage.
00;27;09;11 - 00;27;20;09
Meagan
So we have a question from Molly about just favorite AI tools in general. She says, you know, you mentioned Jasper and Anthropic, but are there any others, anything that springs to mind from the group?
00;27;20;14 - 00;27;43;18
Mike
Yeah, I think there's a huge number of tools that you can use that are more general purpose. So, you know, Jasper, Copy.ai, there are so many of these text generation tools; a lot of them are aimed at simplifying the blog creation process or content creation. Beyond that general purpose stuff, there are a lot of tools being built specifically for research applications now as well.
00;27;43;20 - 00;28;19;27
Mike
So, you know, using some of the underlying language models like GPT to summarize qualitative data. You feed it, you upload your transcripts from, you know, interviews and surveys, and you get quite good, coherent summaries on the back end. I don't know if it's appropriate to name names here or not, but we're actually running a session next week, a workshop on Insight Platforms, with a company that has, you know, a number of tools for that kind of qualitative analysis.
00;28;19;27 - 00;28;49;06
Mike
There are a lot of others that are doing text analytics. Like I say, you know, Canvs AI is a good tool. There's a company in New Zealand called Yabble that has a summarize feature. So yeah, there's a lot, actually, sort of too many really to reference. But I would say we're hosting another event on Insight Platforms at the end of April and we'll be showcasing, just giving short introductions to, a whole lot of new generative AI tools.
00;28;49;06 - 00;28;51;23
Mike
So keep your eye out for that if you're interested in learning more.
00;28;51;26 - 00;29;02;14
Meagan
This is a really tactical question. Julien, have you tried putting qual transcripts, data from interviews or focus groups, into an AI platform?
00;29;02;14 - 00;29;26;12
Julien
We did, actually. We did that a few months ago with Joel’s team. We wanted to see how close their analysis came to ours. I don't remember what the percentage level of closeness was, but it was very, very close. And this was actually from an online forum, it's not even focus groups, so there's a whole lot of different types, different pieces of data.
00;29;26;12 - 00;29;51;09
Julien
And I don't know if you want to speak to the process of how that was done, but that was another aha moment for our team, where we need to be doing that a bit more frequently. There were still some gaps, but it was very, very close, just disturbingly close, as we've talked a bit about.
00;29;51;10 - 00;30;07;06
Joel
I wanted to touch on something earlier, actually, but didn't really have the right opportunity. We talked a bit about how AI can be just egregiously wrong. You know, you can ask it, what is the fifth word in the second chapter of To Kill a Mockingbird, and it'll give you an answer, but it's not going to know.
00;30;07;06 - 00;30;27;18
Joel
It's not going to be the right answer. And so if you ask it fact-based stuff, like hard, very hard trivia, fact-based questions, it'll get it wrong and be egregiously wrong. But if you give it a transcript and then ask it to summarize the transcript, it's not going to get it egregiously wrong, because you just gave it to it and it has that context fresh.
00;30;27;23 - 00;30;48;15
Joel
And so, you know, there are areas where you can reliably trust AI, although, you know, we did discuss how you have to be cautious with general fact-based stuff that it might not know; it might not be in its training set, or maybe not appropriately in its training set. But if you give it the data yourself, then it definitely has that information.
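A sketch of the grounding point Joel makes: paste the transcript into the prompt so the model summarizes text it was actually given rather than recalling facts. The transcript snippet and model name are invented; a long transcript would need to be split into chunks that fit the model's context window.

```python
# Hypothetical sketch: summarize a transcript supplied in the prompt itself,
# so the model works from given context rather than from memory.
from openai import OpenAI

client = OpenAI()

transcript = """Moderator: What did you think of the new yogurt concept?
Participant 1: The flavor sounded great but the price felt high.
Participant 2: I'd try it if it came in a bigger tub."""

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{
        "role": "user",
        "content": (
            "Summarize the key themes in this focus group transcript "
            "in three bullet points:\n\n" + transcript
        ),
    }],
)
print(summary.choices[0].message.content)
```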
00;30;48;15 - 00;31;08;00
Joel
And I would argue it's about as good as a lot of humans are at summarizing. Now, it's not going to be as good as a domain expert, someone who knows the area they're working in. But certainly an average human; I think you can often think of a lot of AIs as kind of like the average human off the street.
00;31;08;00 - 00;31;23;14
Joel
Right. So they're not going to be as good as an industry professional or a domain expert. Let's say you're doing something in, you know, the dental industry, and you have people who have looked at that area; it's not going to be as good as that, but it'll still be very good.
00;31;23;16 - 00;31;36;05
Meagan
Joel, Julien, Kathy, Mike, thank you so much for your time today. This has been so interesting and I'm sure really helpful for everyone joining. So, yes, thank you, everyone, and we'll see you soon.