54. How You Can Use Artificial Intelligence in Market Research
00;00;06;23 - 00;00;33;16
Speaker 1
Hi. Welcome to Dig In, the podcast brought to you by Dig Insights. Every week we interview founders, marketers and researchers from innovative brands to learn how they're approaching their role and their category in a clever way. Welcome back to Dig In. This time we've got someone internal to Dig Insights with us. Joel, our EVP of Advanced Analytics, is joining us.
00;00;33;16 - 00;00;36;19
Speaker 1
We're going to chat AI today. Joel, how are you doing?
00;00;37;12 - 00;00;38;12
Speaker 2
I'm doing great. How are you, man?
00;00;39;29 - 00;00;59;05
Speaker 1
Yeah, I'm doing pretty good. That said, I was actually at a conference this week, so this week feels particularly long, I'm sure also because of the post-conference dinners and drinks that were attended. But yeah, it's been a pretty busy week. I'm excited to chat with you today.
00;00;59;21 - 00;01;01;07
Speaker 2
Great. Me too.
00;01;01;16 - 00;01;13;04
Speaker 1
So before we jump into talking about AI and how it's being leveraged, can you tell the listeners a little bit about who you are and your background?
00;01;14;09 - 00;01;39;20
Speaker 2
Sure. Yeah. So I lead the Advanced Analytics team at Dig. I've been here for almost ten years now; it'll be ten years in a few months. So I've seen a lot here, and helped develop a lot of different aspects of the business. I used to do a lot of tech and analytics and innovation.
00;01;39;20 - 00;02;01;10
Speaker 2
And now I've sort of found a niche in leading analytics and continuing to develop proof-of-concept innovation work that we can take to scale within our organization, and then scale externally to other clients, so we can bring it to an even higher scale.
00;02;02;03 - 00;02;04;03
Speaker 1
And how did you end up in analytics?
00;02;05;09 - 00;02;34;21
Speaker 2
Sure. Yeah. I've always been good at math and good with technology, and I love pushing the boundaries of innovation. Fortunately, I found Dig after a few jobs in market research, looking for a company that's really entrepreneurial, willing to listen to new ideas, and willing to take risks and spend some money on undeveloped new innovation ideas.
00;02;34;21 - 00;02;57;06
Speaker 2
And I found a great place with Dig. Since then, well, right now I'm taking a master's in Applied AI, which I've found extremely interesting and eye-opening on all the different things we can use AI for. Lots of industries are using AI and it's very buzzworthy, very much a buzzword.
00;02;57;06 - 00;03;20;21
Speaker 2
But my opinion is that it's actually further along than most people think. The applications are very wide. A lot of people have this perception that it's going to be scary, that there are going to be robots taking over our lives, and I don't think we're anywhere near artificial general intelligence.
00;03;20;21 - 00;03;35;16
Speaker 2
But I do think the applications are not keeping up with the latest developments in AI, and I think there's a lot of ways that we can take advantage of that.
00;03;36;15 - 00;03;50;14
Speaker 1
Can you explain that to someone like me who works in marketing? When you say that it's further along than maybe we think it is, that the applications can't keep up with the advancements, is there an example you can give me of what you mean by that?
00;03;50;27 - 00;04;18;24
Speaker 2
Yeah, for sure. I think a good example is autonomous vehicles. If you look at the various companies doing it, there are kind of two strategies being taken. A lot of companies are using LIDAR, which is lasers on top of the cars, to detect exactly where to go, where curbs are, and where you can't drive.
00;04;19;03 - 00;04;42;20
Speaker 2
In those cases, they're very limited in geography because they have to have high-definition maps and whatnot. And then Tesla is taking a totally different approach, and Comma.ai is also sort of following in their footsteps with the same approach, which is to go completely AI-based with neural networks, to really mimic the human brain in the actions and choices that a human driver would make.
00;04;42;29 - 00;05;13;04
Speaker 2
And Tesla's kind of taken the bold step of not using the traditional sensors that every other company and organization is using for autonomous driving. Basically, they're making this multibillion-dollar bet that they can solve it through vision only. And so far, not everybody has tracked along with autonomous vehicles, but you can drive almost anywhere with very little intervention.
00;05;13;04 - 00;05;25;16
Speaker 2
It's not going to be perfect. It's kind of like driving beside a 16-year-old who is learning to drive. But it works, and it's shocking that it works. So that's a good example, I think.
00;05;25;16 - 00;05;44;17
Speaker 1
Oh, that's really cool. That's a really exciting opportunity, and I like the way it's being leveraged. Do you have any other things that you're really excited about, outside of market research, that you're seeing happen? Maybe something you learned about in your master's that you can share?
00;05;45;18 - 00;05;54;18
Speaker 2
Sure. Yeah. I mean, there's tons. There are sort of two main branches. And by the way, I should define AI, just to make sure we're all on the same page here.
00;05;54;18 - 00;05;56;02
Speaker 1
But please do.
00;05;56;16 - 00;06;18;24
Speaker 2
There are two broad definitions of AI. One sort of traditional definition, used more in academic circles and less colloquially, is to talk about anything that's an automated system, whether it's really smart AI or just rule-based systems. That would include any type of automation or machine learning.
00;06;20;04 - 00;06;47;02
Speaker 2
And it's not really that exciting of a definition; a lot of companies are doing automation in some form, like who doesn't do automation, right? The focus of this conversation is the second definition, which I think is what people colloquially think about when they say AI. Basically it's neural networks, artificial neural networks that are designed to mimic the human brain, and they have almost unlimited capacity to learn.
00;06;47;02 - 00;07;01;00
Speaker 2
And they're really only bottlenecked by things like the amount of data and the quality of the data. We could get into specific architectures, but we're not going to here. There's real excitement about that side, and that's what I want to focus on here.
00;07;02;18 - 00;07;19;25
Speaker 1
Cool. Okay. That's really helpful, actually, just understanding the way the two are defined. So the Tesla example, that's more on the neural net side, mimicking our brains, correct?
00;07;19;26 - 00;07;44;26
Speaker 2
Exactly. That's right. And to expand on that a tiny bit further, there are two large branches within the neural network side, what I'm calling the sexy version of AI. One is computer vision, which is the Tesla autonomous vehicle driving, because it has to look like a human, sort of mimicking our eyes.
00;07;45;07 - 00;08;10;24
Speaker 2
And then the other big branch is natural language processing, which is kind of like using our ears and our mouth to really understand and use language. That's more the focus of what we do at Dig, and there's a ton of applications where we can take advantage of the latest progress in natural language processing.
00;08;10;24 - 00;08;25;01
Speaker 1
Very cool. So if we bring the conversation to AI within market research, where do you think the biggest opportunities are to leverage AI within market research?
00;08;26;15 - 00;08;52;01
Speaker 2
Yeah, broadly speaking, I would say natural language processing, which is a huge umbrella term. And within there, there are all kinds of applications. The traditional applications would be things like sentiment analysis and verbatim coding, reading the raw text that people write in unprompted in a survey.
00;08;52;11 - 00;09;28;13
Speaker 2
And there's some excitement there, but that's more traditional and less exciting. The more recent advances allow for things like abstractive summarization of open-ends. What that refers to is you can get AI to read, say, a whole book, or read various articles, and then summarize them in completely new language of its own, still English, but writing unique sentences rather than just copying and extracting sentences from the source.
00;09;28;13 - 00;09;32;19
Speaker 2
And that's pretty exciting, so there are applications for our open-ended questions there too.
00;09;35;00 - 00;09;58;14
Speaker 1
So in that case, basically what you're saying is, if someone was responding to an open-ended question on a survey, say on Upsiide, the AI could be used to synthesize all of the different responses and pull out the key learnings or key points of view or opinions. Is that what you mean?
00;09;58;26 - 00;10;20;18
Speaker 2
Yeah, exactly. Traditionally, what market researchers often do is get someone, they call it a human coder, to read through everything and then come up with themes. So you might say 30% mention flavor, and then 25% mention, say, the brand's environmental position, and other things like that.
00;10;20;18 - 00;10;50;02
Speaker 2
That's if they're answering a question by writing unprompted text. Instead, you can get an AI to read everything very quickly, automatically, and summarize all of it, whether as a bullet-point list or in paragraph form or whatever; you can ask it for any format, and it will summarize that text to do the same type of thing in seconds.
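To make that concrete, here is a minimal sketch of how such a summarization request might be assembled for a large language model. Everything here is illustrative rather than Dig's actual tooling: `build_summary_prompt` is a hypothetical helper, and the call to whatever LLM service would consume the prompt is deliberately left out.

```python
def build_summary_prompt(question, responses, output_format="bullet points"):
    """Assemble an abstractive-summarization prompt for a large language model.

    The model is asked to write new sentences in its own words, rather
    than extract and copy respondent text verbatim.
    """
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(responses, 1))
    return (
        f"Survey question: {question}\n\n"
        f"Open-ended responses:\n{numbered}\n\n"
        f"Summarize the main themes across all responses as {output_format}. "
        "Write the summary in your own words; do not copy sentences verbatim."
    )

prompt = build_summary_prompt(
    "Why did you choose this brand?",
    ["I love the flavor.", "Their packaging is recyclable.", "Best taste for the price."],
)
```

The returned string would then be sent to a language model, which writes the summary in whatever format was requested.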
00;10;50;02 - 00;11;12;17
Speaker 1
Very cool. So we've talked about natural language processing, text summarization, automated verbatim coding. Anything on the output side, in terms of writing reports or key findings? Are you seeing anything in that space?
00;11;12;28 - 00;11;47;04
Speaker 2
Yeah, exactly. I mean, we can be doing things like writing insights based on a slide, or producing slides with other forms of automation. Another exciting area that we're doing proof-of-concept work with is idea generation. With Upsiide, as you know, we do a lot of work testing our clients' ideas, and we can extend that a step even earlier in the lifecycle for our clients. They give us some of the things they're working on.
00;11;47;04 - 00;12;08;19
Speaker 2
We can get our clients to write down what the product is. If they want to test claims for products, they can say, these are the things we want to make sure we're touching on, like trust and reliability; they can give us some adjectives like that, and then we can get the AI to write a bunch of different taglines or claims that the client could make.
00;12;08;25 - 00;12;28;23
Speaker 2
And the AI can do it market-specific and geography-specific, whether that's in the U.S. or Canada or the UK, anywhere, and it'll be able to pull in different aspects of those geographies and local variations as well.
00;12;28;23 - 00;13;03;00
Speaker 1
Yeah. And to contextualize this further, I'm really familiar with the idea generation function, because Joel essentially pulled me into a meeting with one of our founders and was like, look at all of these cool things you can do. I didn't even realize that your team was working through so much of this. So we actually used the AI's idea generation in one of our marketing studies, because I needed to come up with 20 different chocolate bar flavors and 20 different potential hamburger flavors.
00;13;04;00 - 00;13;31;13
Speaker 1
And I think it took like two seconds or something. Yeah, it was so fast, it was crazy how quickly you get it back. And it was really interesting; I had never thought of any of those ideas. I think one of the ideas was a s'mores cheeseburger, so I understand why I didn't think of that, but for a few of the others I was like, that sounds pretty tasty.
00;13;32;12 - 00;13;54;22
Speaker 1
And they were brand new, really cool ideas. So for anyone wanting context on what that actually means: if I'm doing idea screening or anything like that, this idea generation tool can actually act almost like another member of the team when it comes to coming up with new ideas. Super cool.
00;13;55;19 - 00;14;21;11
Speaker 2
At a high level, the way I try to explain it, sorry to cut you off, is to think of it basically like this: if you can explain to a human what you're looking for in natural language, just with regular words, no computer code or anything special, we can give that same instruction to an AI and it can respond like a human would.
00;14;21;11 - 00;14;52;10
Speaker 2
And the secret sauce is in how much and what type of context you give it. You really want to fine-tune it, give it extra training on a bunch of examples, and get it to drill into that framework. That makes sense if you're going very narrow into a specific domain where you want it to take advantage of context-specific information; you can give it all that background and then say, now give me these ideas.
00;14;52;10 - 00;15;18;15
Speaker 2
You know, like: this is an up-and-coming category that's taking advantage of these trends. The more background you give it, the more it goes, okay, now I understand, and you get narrower and narrower. Once it really understands the kind of examples or the kind of help you want to get out of it, it can get really hyper-specific and give you really rich results.
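As an illustration of that layering, here is a hedged sketch of how an idea-generation prompt might be built up from progressively narrower background. The helper `build_idea_prompt` and all of its parameters are invented for illustration, not Dig's actual tooling.

```python
def build_idea_prompt(product, attributes, market, n_ideas=20, trends=None):
    """Layer background context into an idea-generation prompt.

    Each extra layer of context (attributes, trends, geography) narrows
    the model toward more specific, relevant ideas.
    """
    lines = [
        f"You are generating new product ideas for: {product}.",
        f"Target market: {market}. Use local tastes and variations where relevant.",
        f"Every idea should touch on: {', '.join(attributes)}.",
    ]
    if trends:
        lines.append(f"The category is taking advantage of these trends: {', '.join(trends)}.")
    lines.append(f"Write {n_ideas} short, distinct ideas as a numbered list.")
    return "\n".join(lines)

p = build_idea_prompt(
    "chocolate bar flavors", ["trust", "indulgence"], "Canada",
    n_ideas=20, trends=["nostalgic desserts"],
)
```

The prompt would then go to a language model; dropping or adding layers is how the output moves between generic and hyper-specific ideas.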
00;15;18;15 - 00;15;48;19
Speaker 1
Yeah, it's so cool, and also so far from my area of expertise, but it's just fascinating what you're coming up with within your team. I think it's probably worth us talking about the way that you work with the other teams within Dig. You guys are almost like an innovation hub, I guess.
00;15;49;19 - 00;16;14;07
Speaker 1
It's almost like a squad within the company where you're coming up with brand new potential use cases for analytics and different ways of looking at data. Can you talk about the process of how you figure out what you're working on? Obviously that sits outside of the client projects that you have all the time.
00;16;14;07 - 00;16;44;26
Speaker 2
Yeah. So as you kind of touched on at the end, my team does a combination of client project work and innovation and proof-of-concept development. That really allows us to stay on the cutting edge, stay up to date, and track along with what clients are looking for and asking us for, and also how we can solve their problems, maybe not directly what they're asking for, but really getting at the question behind the question, in a way.
00;16;45;06 - 00;17;03;13
Speaker 2
And so that's the kind of stuff we work on in terms of project work. And while we're doing that, we've always maintained, we've grown the team more than we need to in terms of project work, so that we can focus on our development and innovation at the same time.
00;17;03;13 - 00;17;22;15
Speaker 2
Because that's really where we can help grow the company, build value for our clients, speed up projects, do more automation, and develop new techniques, when we can actually spend our time on development and innovation. So that's kind of what we do.
00;17;22;15 - 00;17;38;15
Speaker 2
And then we work with other teams internally. We develop a proof-of-concept type of idea, prove it out, put it in PowerPoint, show it to the team and our internal stakeholders, and get feedback. Then we can say, okay, this is what we've developed; now how do we take it to the next level?
00;17;38;23 - 00;18;00;18
Speaker 2
We recently did this and came up with about eight ideas, and we said, well, what can we really build out, and where is it worth investing more time, energy and resources? And then we start working with other teams to say, okay, this is the proof of concept at a very high, 30,000-foot level.
00;18;00;25 - 00;18;18;27
Speaker 2
Now we want to drill down into, say, these two ideas, because we want to actually put them into our platform. We want to build them out, maybe internally first, where we can automate so that anyone among the 200 people that work at Dig today could use it. And then we ask how we bring it to our clients and really scale it up even further.
00;18;18;27 - 00;18;24;12
Speaker 2
So we work with different teams to do each of those parts. But that's kind of an overview of how we work.
00;18;27;18 - 00;18;49;29
Speaker 1
And how do you decide? You talked about the process of whittling down the list, deciding where to spend your time. I've always worked in tech companies where the product teams do things like ICE scoring, prioritizing based on impact and effort for the development team, and I know that on the product side with Upsiide we do quite a bit of that.
00;18;50;22 - 00;19;09;10
Speaker 1
How do you gauge whether or not you're going to take ideas forward? Is it based on how interesting something is to senior leadership? I'm sure it's based on lots of things. But what's the process of figuring out what has to wait and what gets pushed along the quickest?
00;19;09;24 - 00;19;34;14
Speaker 2
Yeah, for sure. So for me, I kind of work on what I think would be useful to the organization and what I think we can crank out and build a proof of concept around with our current capabilities, and, obviously, extending beyond those capabilities with everything that we're doing.
00;19;34;14 - 00;19;54;26
Speaker 2
And then once we've got a proof of concept, we talk internally with different stakeholders. We have this voting process that you're probably familiar with, where we look at what everybody thinks: how impactful will it be to our platform, and how much effort will it be to build out further?
00;19;54;26 - 00;20;10;15
Speaker 2
And do we think applications will come up along the way as we scale it up, and that kind of thing? So everybody gets in their opinions, and we have this sort of wisdom-of-crowds way of deciding.
00;20;11;08 - 00;20;23;03
Speaker 1
That's really cool. Is there anything that we're working on internally that you feel like you can mention, that might be coming out in the next year or so, that you're super excited about?
00;20;24;22 - 00;21;08;11
Speaker 2
I feel like I've touched on some of the main stuff, but there's a lot that we can do with the latest large language models. I think idea generation is really exciting, and I think summarizing open-ended verbatims is really exciting. One thing I would add, though, from some of the work I've been doing just in the last month or so: it's great if we come up with a whole bunch of new ideas, but sometimes clients aren't adopting them
00;21;08;11 - 00;21;31;17
Speaker 2
at the rate that we think naturally makes sense. Sometimes their stakeholders are looking for results in terms of traditional metrics and whatnot. And those traditional metrics that I mentioned earlier include thematic coding for raw, unprompted verbatim responses to questions.
00;21;31;29 - 00;22;04;08
Speaker 2
And so I said, well, how can we take that framework and, without changing it, make our AI work within it? What I found we can do, with some fairly sophisticated analysis and pipelines, is build a fine-tuned large language model by taking some example themes.
00;22;04;08 - 00;22;35;22
Speaker 2
So if someone can go through and code a relatively small number of themes, which might take them five or ten minutes, then we can feed that into our algorithm, let it learn from there, and then propagate that to the rest of the raw data. They give us a fairly small sample, associating various comments with themes.
00;22;35;29 - 00;22;59;10
Speaker 2
And then we can take that and predict across the rest, the whole thousand respondents, and get those results very quickly, again in seconds or minutes. So that's an exciting area. It's exciting because I'm constantly shocked at how accurate the model can be and how it can learn from so little data.
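The workflow described here, a small hand-coded sample, a model that learns from it, and predictions propagated to the full dataset, can be sketched in miniature. The production approach described is a fine-tuned large language model; the toy classifier below (simple word-overlap scoring, all names invented) only illustrates the shape of that pipeline, not its accuracy.

```python
from collections import Counter

def tokenize(text):
    return [w.strip(".,!").lower() for w in text.split()]

def train_theme_model(labeled):
    """Build one word-frequency profile per theme from a small hand-coded sample."""
    profiles = {}
    for text, theme in labeled:
        profiles.setdefault(theme, Counter()).update(tokenize(text))
    return profiles

def code_verbatim(profiles, text):
    """Assign the theme whose word profile overlaps the comment most."""
    words = tokenize(text)
    return max(profiles, key=lambda t: sum(profiles[t][w] for w in words))

# A handful of hand-coded examples stands in for the 5-10 minutes of human coding.
seed = [
    ("love the flavor and taste", "flavor"),
    ("great chocolate flavor", "flavor"),
    ("recyclable packaging matters to me", "environment"),
    ("they care about the environment", "environment"),
]
profiles = train_theme_model(seed)

# Propagate the learned themes to the uncoded remainder of the dataset,
# then tabulate theme shares the way a traditional coding report would.
uncoded = ["the taste is amazing", "too much plastic packaging"]
codes = [code_verbatim(profiles, v) for v in uncoded]
shares = {t: codes.count(t) / len(codes) for t in profiles}
```

In the real pipeline the classifier would be a fine-tuned language model and the "rest of the data" would be the full thousand respondents, but the in/out shape is the same.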
00;22;59;25 - 00;23;13;17
Speaker 2
But it's also less exciting to me, because I like the stuff that's really game-changing and not necessarily fitting into the traditional market research box. So I like to work on both sides of those.
00;23;13;17 - 00;23;38;04
Speaker 1
Yeah, that's really cool. You kind of touched on it there, but with the idea generation tool, how good are the outputs? I found them to be quite good, but you've probably done far more tests than I have. Do you feel like they produce a good assortment of ideas?
00;23;38;13 - 00;23;58;12
Speaker 2
Yeah, for sure. We can get it to be as specific as we want as we give it more and more information, like I was talking about before. And we've actually tested them in Upsiide, in an idea screen, where we take some of our clients' ideas.
00;23;58;22 - 00;24;19;18
Speaker 2
We did a demo with some of our clients' ideas and put in some AI-driven ideas, and some of the AI ideas actually outperformed the clients' ideas. So it's pretty incredible: we can get it to generate those ideas, validate them with real respondents, and find that some of them are actually outperforming.
00;24;19;29 - 00;24;36;00
Speaker 2
And there are a number of ways to do this. For example, you could always throw in a handful of AI-generated ideas based on the rest of the content and just test them alongside. That's something I would love to do; I don't think our product team is going to be doing that anytime soon.
00;24;36;01 - 00;24;52;26
Speaker 2
It's kind of out there. But I would love to do that kind of thing, because we'd be adding value without really costing the client anything, and maybe we do generate a new idea that they wouldn't have thought of otherwise.
00;24;52;26 - 00;25;22;11
Speaker 1
Yeah, that's super cool. And also such a compelling reason to leverage AI as a client, if you know that ideas generated by the AI are actually outperforming the ones that you've brainstormed internally. That does lead me, though... I feel like we can't really talk about AI without talking about people's perception of AI, what it might do to their jobs, or, yeah,
00;25;22;11 - 00;25;47;07
Speaker 1
To the way that businesses function these days. When it comes to AI, in your mind, should people be concerned about it replacing them within the world of market research? Or does it just mean that some people will focus on different things? What's your viewpoint on that?
00;25;47;07 - 00;26;13;17
Speaker 2
Yeah, it's a great question. My take on that is: as our industrial societies have come up with more and more tools and technologies to make our lives easier, we have not had jobs just fall away. We've often seen new jobs, because as our productivity increases, the need and demand increase as well.
00;26;13;25 - 00;26;26;11
Speaker 2
And so I don't have any concerns there, and I think the jobs will not be going away. There's kind of a short-term and a long-term look, and I'd love to have a crystal ball and tell you what it's going to look like 20 years from now.
00;26;26;11 - 00;26;49;00
Speaker 2
But it's very hard to say long-term like that. I don't really see massive shifts happening in the short term, at least, because even though I think AI is farther along than people think in terms of how we can use it today, I don't think it's near artificial general intelligence.
00;26;49;00 - 00;27;14;14
Speaker 2
And that's really where people might see drastic cuts, if we could just train an AI to run businesses or build other AIs, and it just gets out of hand, obviously. But we're nowhere near that point. And historically, whenever we look at new innovations and developments, they often create more jobs than go away.
00;27;14;14 - 00;27;34;28
Speaker 1
Yeah, that is good to know. It makes me feel better, talking to someone who knows a lot about this kind of stuff. It's a little bit comforting, when you hear about how sophisticated some of the AI that you're talking about is.
00;27;35;08 - 00;28;08;17
Speaker 1
To close out, because we're almost out of time here: where do you think AI will be in two to three years' time? What do you think will be the norm when it comes to it? It doesn't have to be in market research; it could be in market research, but also externally. What will be the norm in how AI is integrated into our day-to-day or into market research tools? What would be strange now but won't be strange then, I guess, is what I'm trying to ask.
00;28;10;01 - 00;28;39;16
Speaker 2
Yeah, good question. People are using digital assistants already in our everyday lives, like Siri and Alexa and Google and whatnot, and I think those are incredible. I also think they're vastly underutilized, or rather, limited. We haven't talked ethics yet, and I don't think we have time, because it's a huge topic.
00;28;39;27 - 00;28;59;19
Speaker 2
But I think ethical considerations are holding things back to a considerable degree, and rightfully so. There are a lot of considerations, and nothing's insurmountable, I don't think, at all. But it's just important to make sure we get it right.
00;28;59;28 - 00;29;25;16
Speaker 2
And I can definitely see, once we've had more resources put into the ethics side of things, and I think that's a blocking area, like I just said, once we've had more resources put into that as an overall industry, we'll be able to develop new products, like a digital assistant that can do a wide variety of things.
00;29;25;21 - 00;29;49;03
Speaker 2
Like right now, if you ask Siri to do X, Y or Z, it can do fairly templated things, but I definitely think we can take that a bigger step further in the future. I think that's something that will probably evolve faster than other aspects, or at least touch people's everyday lives. And I already talked about autonomous vehicles, which I think are coming sooner than people think, too.
00;29;50;03 - 00;30;01;02
Speaker 2
I mean, it's pending regulatory approval and whatnot. But yeah, I think digital assistants are an area ripe for continued innovation.
00;30;01;02 - 00;30;18;28
Speaker 1
Okay, you can't tease that and then leave us hanging. So when you talk about ethics within AI, you said there's loads of things to consider. Try and give us a Coles Notes version: what are people considering, and why is AI being held back because of ethics?
00;30;19;18 - 00;30;43;28
Speaker 2
I mean, right now there's a lot around language. I talked a lot about natural language processing, and an example would be: when you train an AI to learn how to read and write, for lack of better colloquial terms, it reads and writes based on what it learns from the information that we give it.
00;30;43;28 - 00;31;04;09
Speaker 2
And when you give it the Internet, there's a whole bunch of stuff on the Internet that is, well, if you want digital text, the Internet is obviously the place to go. We also digitize books and Wikipedia and other curated sources. So anyway, it's learning from the way people speak.
00;31;04;09 - 00;31;40;00
Speaker 2
And often, when companies like OpenAI and Google and Amazon create these large language models, they test how the model learns to treat different groups of people. A really simple example is when you test it by typing in "He was very..." and asking it to fill in the blank. You're telling it "he," as a male, "was very," and then it says things like "tall" and "lazy."
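A toy sketch of the fill-in-the-blank probe Joel describes: instead of a real large language model, this uses a tiny invented corpus and simply counts which words follow each prompt, so the "completions" stand in for what a bias audit would compare. The corpus and adjectives are illustrative assumptions, not real model output.

```python
from collections import Counter

# Tiny invented training corpus (an assumption for illustration only).
corpus = [
    "he was tall", "he was lazy", "he was strong",
    "she was kind", "she was emotional", "she was kind",
]

def completions(prompt, corpus):
    """Count the words that follow `prompt` in the corpus --
    a stand-in for a model's fill-in-the-blank predictions."""
    counts = Counter()
    for sentence in corpus:
        if sentence.startswith(prompt + " "):
            counts[sentence[len(prompt) + 1:]] += 1
    return counts

# Compare the distributions the two prompts elicit.
he = completions("he was", corpus)
she = completions("she was", corpus)
print(he.most_common())
print(she.most_common())
```

A real audit runs the same paired prompts through the trained model and compares the sentiment and connotation of the completions, which is what surfaces the "tall"/"lazy" versus "kind"/"emotional" asymmetry Joel mentions.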
00;31;40;00 - 00;32;04;10
Speaker 2
And there are interesting differences in the positive and negative connotations of the adjectives it uses to describe a man versus a woman. So there are really big considerations around how you apply that type of AI. Because say, for instance, someone uses an AI to write a book.
00;32;04;10 - 00;32;26;00
Speaker 2
You don't want it to propagate all these negative aspects of humanity that it's learned in the process. So that's a big consideration, and it's kind of a well-known one that everybody's talking about. Another interesting example I've been thinking about lately: I just switched to Spotify from Apple Music.
00;32;26;00 - 00;32;52;23
Speaker 2
I'm a late switcher, and I'm finding the recommendations are incredible. I put on some music that I really like, and it comes up with new, AI-driven recommendations based on that. But what I've found is that it knows what I want to listen to: let's say I'm listening to a band, and it says, okay, now we're going to recommend a collaboration between the band you're listening to and another band right beside it.
00;32;52;23 - 00;33;15;18
Speaker 2
And on face value, there's nothing wrong with that. I'm not taking a huge stand and saying there's something wrong with it, except in a way I am, because it's causing a problem for creativity. It learns to iterate instead of making leaps: instead of surfacing a new genre, it's going to continually iterate within the same one, and there's potential for it to stagnate.
00;33;15;19 - 00;33;34;11
Speaker 2
It says, oh, I've found that if I keep feeding you this type of stuff, you'll stay engaged. It's the same idea as echo chambers in social media, on Facebook and Instagram and all these things, right? You just end up in this echo chamber because it finds that you really like the same type of content you keep looking at.
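A minimal sketch of that iterate-versus-leap dynamic: a nearest-neighbour recommender over hypothetical 2-D "taste" vectors, where each round recommends the unplayed track closest to the centroid of the listening history. The tracks and coordinates are invented for illustration; the point is that the nearby iterations always beat the creative leap.

```python
import math

# Hypothetical tracks in a made-up 2-D taste space.
tracks = {
    "band_a": (0.0, 0.0),
    "collab_ab": (0.1, 0.1),
    "band_b": (0.2, 0.2),
    "new_genre": (5.0, 5.0),  # the creative "leap", far from the cluster
}

def recommend(history):
    """Recommend the unplayed track nearest the centroid of the history."""
    cx = sum(tracks[t][0] for t in history) / len(history)
    cy = sum(tracks[t][1] for t in history) / len(history)
    unplayed = [t for t in tracks if t not in history]
    return min(unplayed,
               key=lambda t: math.hypot(tracks[t][0] - cx, tracks[t][1] - cy))

history = ["band_a"]
for _ in range(3):
    history.append(recommend(history))
print(history)  # the leap ("new_genre") only surfaces once nothing else is left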
00;33;34;11 - 00;34;00;21
Speaker 2
And that's another massive ethical consideration. So I think there are really important ethics here. When I started diving into AI, I didn't think I'd be excited about ethical considerations, but it's extremely interesting, and I think they're very important to consider. Fortunately for me, a lot of our applications at Dig within market research are not, you know, as potentially...
00;34;00;22 - 00;34;05;27
Speaker 2
Right, you know, concerning. Yeah, exactly, as it could be.
00;34;05;27 - 00;34;06;18
Speaker 1
An example, yeah.
00;34;07;12 - 00;34;19;06
Speaker 2
So, you know, there are definitely considerations, and we take them very seriously. But I wouldn't say it's on the same playing field as, you know, how the echo chambers on social media are affecting our society right now.
00;34;19;19 - 00;34;30;26
Speaker 1
Well, this has been so interesting. Thank you so much for spending the time with me today, and I guess I'll see you in a meeting tomorrow. Great, we'll see you really soon.
00;34;31;14 - 00;34;31;28
Speaker 2
Yes, great.
00;34;31;28 - 00;34;44;00
Speaker 1
Thanks for tuning in this week. Find us on LinkedIn at Dig Insights, and don't forget to hit subscribe for a weekly dose of fresh content.