Welcome to For What It's Worth, a podcast from Raymond James designed to help you plan, invest and live smarter. Lenssen: Hi listeners, and thanks for joining me. I'm your host, Paige Lenssen. We're glad to have you with us. You can find this episode and more of For What It's Worth on Spotify, Apple Podcasts and RaymondJames.com. Today's topic is an exciting one. I've been personally looking forward to it. We're talking all about ChatGPT and generative AI, artificial intelligence. Earlier this year, ChatGPT became the fastest consumer application in history to hit 100 million users. We're going to learn why its popularity has been so explosive, what efficiencies it might unlock for businesses and consumers, and what related risks may still need to be worked through. I'm pleased to introduce our featured guest. He'll be sharing his individual perspective on this technology. I'm joined in the studio today by Raymond James IT Head of Innovation, Kemal Kvakic. Kemal, thank you so much for speaking with me today. Kvakic: Paige, thank you for having me. It's a pleasure to be here and talk to you about this interesting technology. Lenssen: I'm excited to jump right into this. It is a really engaging topic. Can you get us started with a little bit of an overview? What is ChatGPT, and what is the technology behind it? Kvakic: Sure, let's start with the basics, right? So ChatGPT is generally a chat solution that's available either on a website or in a mobile app. On the surface, it looks like things we've used in the past, but behind the scenes, it is truly a lot different, right? So if I were to search that same question on the internet, I would probably get an answer something like, "ChatGPT is generative AI that uses large language models, or LLMs, and it provides human-like responses to the questions you ask it. 
GPT stands for generative pretrained transformer, which in the machine learning field is a transformer model, a neural network that learns context and thus the meaning behind the relationships in data." Now, to many of our listeners, this really means nothing, right? But this is where the beauty of ChatGPT comes in. This is where ChatGPT is different from what we've seen in the past. It has the ability to play different roles. So say you ask ChatGPT to explain what ChatGPT is, you get this answer, you read it and you say, well, I really didn't understand any of that. Can you explain it to me in a different way? Let's say: I'm a financial advisor, so explain it to me like I'm a financial advisor. Or I'm a developer, or I'm a lawyer, or even better, I'm an eight-year-old. All right, put a story behind it and explain to me what you really are. The response you get this time uses simpler words that give you the context. So the response might be something like, "ChatGPT is like a really smart computer that can talk to you, that understands you, that knows what you're going to say. And you can talk back to it and it will give you a human-like response." Right? So it's completely different from what I just read you, and that is the true power of ChatGPT. Now, a couple of interesting facts. Of course, it runs on artificial intelligence. Artificial intelligence has been around for 30+ years, so there's not really anything new here in that space. But a couple of fun facts: it runs on data through 2021. Some people don't know or realize that, so for anything that happened after 2021, you will probably not get an answer, or at least not an accurate one. And it runs on internet data; one of its main sources is the internet. It was created by a company called OpenAI, which was founded in 2015. 
A lot of billionaires came in and said, let's put something together. People worth mentioning are Sam Altman, Reid Hoffman, Jessica Livingston, quite a few others, and of course, Elon Musk as well. He has since resigned from his post; if you search for that, you'll find different explanations as to why, but rumor has it he is actually trying to come up with his own version of ChatGPT and compete in this space. And then one really interesting, for me, but for many maybe scary, fact could be that the engineers who built this technology sometimes don't really understand why it gives the answers that it does. So I do hope through our conversation today we'll be able to explore this technology further and I'll be able to answer more questions. Lenssen: Let's talk a little bit more about the difference between this and what we've seen before. I happened to be chatting with my parents about this upcoming episode and was excited about it, and they hadn't heard of ChatGPT. So I tried to just explain the basics to them: "You can ask it whatever you want and it'll answer your questions, and you can get into detail." And their first response was, oh, that sounds like Alexa. And I was like, well, it's more than that, but I think there may be some listeners who think the same thing: this sounds like the search engines or chatbots that I'm already familiar with. How is ChatGPT different? Kvakic: So you're absolutely right. Without going really deep underneath into how the engine runs, I will say, fundamentally, it's the way you interact with your traditional chat capabilities, be it Alexa, or Siri, or even a website, when you go to your banking account and you see a little chat capability. It's the way the data and the answers are provided to you. A lot of current, traditional chat capabilities have this thing called intents. 
So we, as developers, would have to create these intents, predict what you are trying to ask, and then match that to an answer behind the scenes. It's almost like relationship data that we have to maintain. So let's say you're asking a question about moving data from one account to another account. If I didn't code it to recognize the intent of your question, you - and our listeners, and I have as well - will probably get an answer like, "I'm sorry, but I don't understand." Well, it doesn't understand because there was no intent behind it; we did not expect that question to come up. So that's one. The second piece is how the answers relate to the data. With regular, traditional chat, even Alexa, if you ask it a question, it will go and find the text on the internet and read that text verbatim. Now, flip that around. ChatGPT doesn't do that. ChatGPT takes a lot of data that's available on the internet, summarizes it, and gives you an artificially intelligent response, a conclusion based on what it learned from the data it had. So the major difference, as I mentioned earlier, is the ability to interact with it and say, I did not understand that. Can you generate a different response, or can you tell me in nontechnical terms what that really means? And so you can go back and forth, and there are a lot of interesting things where people are actually writing movies and plots with ChatGPT as the second author behind the scenes, generating the text. So the capabilities behind the scenes are drastically different from what we're traditionally used to. Lenssen: What do you think has made it grow so fast? We saw that in January of this year, 2023, it passed 100 million users, the fastest-growing consumer application to date. What has made it so popular so quickly? Kvakic: Yeah, I think I mentioned earlier. So of course, it's all about AI, right? 
So AI has been around for a long time. Other big companies - Facebook, Google, Apple - have tried this in the past as well, and they still are; there are a lot of products they are trying to release now. What really made it so popular is ease of use. To those of us in IT who have been in this space, when it came out, yeah, it was great, it was cool, but a lot of us already knew the power of what it was trying to accomplish. For non-techie users, this was the first time they'd seen the power of AI. It went mainstream as well; the media covered it pretty well. And I think those are the top two reasons, in my opinion, why it became so popular. But as I mentioned earlier, other companies have tried to do this. There was a lot of worry behind the scenes about how biased it was going to be, and there are a lot of stories about big companies getting in trouble because they produced wrong results or categorized images incorrectly. So there were a lot of PR issues with it. And OpenAI has spent a lot of time and energy to govern and police the outputs so they don't get in trouble from a PR perspective. And let's face it, OpenAI is a fairly new company in the grand scheme of things, so they didn't have as much to lose in this space as other, bigger companies do. As I mentioned earlier, right now we know Bing is using ChatGPT in its answers. Google has something called Bard that it is playing with and trying to release. So as you can see, movement is happening. But yeah, it's mainly ease of use, mainstream non-techie users being really shocked and surprised at how powerful this tool is, and hype. There are 100+ different companies in this space now, so we'll see where things stand in six months: will it still be ChatGPT, or yet another product that's even better? Lenssen: You mentioned there is so much buzz around this, both with just average users and among companies. 
We saw, just as one example, Microsoft announce that it's going to be incorporating this technology into its Office suite, which so many of us use - Outlook, PowerPoint, Word, Excel. In what other ways, or in what other industries, have you seen companies expressing their interest in making use of this? Kvakic: Yeah, you're right. So Microsoft - here's maybe another fact about OpenAI and ChatGPT - is roughly a 40% investor in this technology. I believe it's about a $10 billion investment that they've made so far. And since then they've actually released something called Microsoft 365 Copilot, which we at Raymond James IT are exploring further. So the big players are in this space, and I foresee almost all of our products, not just Microsoft products, having some level of ChatGPT capability that we're going to be able to use. Now, as for interest from firms that are in the same space as Raymond James, there are definitely efficiency gains. How can we use this technology to make our clients more efficient? That's definitely something we in IT are looking at with this technology as well. Can we take certain procedures, certain documents or texts and contexts, summarize them and provide the answers faster? We've heard of some big firms in our sector providing this to their financial advisors on similar concepts: training these models, these robots, but instead of going against the internet - which, as I mentioned earlier, is 2021 data - refocusing them and saying, let's not go against the internet data; let's go against my own data and summarize this data for us. So there's a lot of movement, but honestly, from an industry perspective, everybody's looking into this. It's not just financial or health care. 
There's been some talk about cancer research institutes and how they can take their results and feed them into ChatGPT to see if it can make sense of them, because there's so much data out there. So I feel like there's going to be a lot of movement in this space. I think there are going to be a lot of use cases. We at Raymond James are partnering with our business partners to see how they can use it, where they are using it today, and how we can integrate that into our services as well. Time will tell - it's early on now - but there's a lot of real potential and a lot of use cases. And as I said, a lot of products will have this integrated into their solutions. I'm excited about Microsoft 365 Copilot for sure, because writing those VLOOKUPs in an Excel spreadsheet - even me, as a developer, I still have to Google how to do it. And I feel this technology will make it happen for us: we'll be able to say, in plain English, create me a VLOOKUP against this worksheet, on this file, this column and this column, and it will just do it for you. Lenssen: That makes me feel better - knowing all the times that I Google, reminding myself how to do VLOOKUPs and different things in Excel, that I'm not the only one. Any time we hear the words "technology" and "efficiency" and "artificial intelligence" together, the question tends to come up: is this going to replace jobs? Is this going to take over the roles that workers are already doing? How do you see that playing out? Kvakic: When I joined Raymond James five years ago, I was brought in to stand up robotic process automation, which we have had successfully running for five years now. The number one concern that everybody had was, will this replace my job? In those five years, we have replaced zero jobs. Because any technology - I mean, I'll go back a long time, to the Industrial Revolution, when it happened. 
That was the same thing, where people were saying, well, there goes my job, because now it's all steam engines and such. Technology, even though it can seem scary, should be seen in the light of upskilling. I do see that certain tasks will change; there are some analysts coming out and saying 7% of jobs will be eliminated. Maybe the work that you do on a daily basis will be eliminated, but that will give you an opportunity to upskill yourself, to learn, to do something different, to do something more thought-provoking. Another analysis says that because of this, developer and user productivity could go up by 50%, because you've got more time to, say, take that certification you wanted to take but couldn't because you were too busy. So I really feel - my personal opinion - that there's little risk when it comes to losing a job. I think this will be a positive impact on us all by giving us more time to educate ourselves, to learn something new and to upskill what we have today. Lenssen: How prevalent do you foresee this kind of technology - ChatGPT or the similar versions coming out - being in our everyday lives in the longer term? Kvakic: Yeah. I will say AI in general is here to stay. There's too much already invested in it. And I do foresee - we talked a little bit about Microsoft 365 - I honestly do foresee that sometime in the near future we'll have some level of an almost personal assistant, be it in our professional or personal lives. I know I use a version of ChatGPT in my personal life as well, and professionally when I can. So I do feel, with the ease of use and further exploration of this technology, that this is here to stay; it's not going away. It's almost like back in the day when iPhones came out and flip phone users refused to switch - they thought it was outrageous and wouldn't last long, and look at us all: barely anybody has a flip phone anymore. 
So this is on the same spectrum: AI, generative AI, ChatGPT and the like will stay around and will definitely have an impact on both our professional and personal lives. Lenssen: Let's talk about some of the risks and the unknowns. I mean, this is a relatively new and, in many ways, still-developing technology. So let's dig into some of the things that are maybe still being figured out. Who, if anyone, is responsible when it comes to the accuracy of answers coming out of ChatGPT? If it's basing its responses on what it's finding on the internet, we know the internet is full of both accurate and inaccurate information. Is there a reliability check of some sort? Kvakic: Yeah. I think ultimately, to your first question, it will be us, right? Even now, if you go into ChatGPT and ask a simple question - will Elon Musk purchase Twitter? - I told you earlier that it's based on 2021 data. The first time around, two or three months ago, the answer was pretty simply, "No, he will not purchase Twitter." If you ask it now, the answer is, "Well, my data is based on 2021, and there are no indications that Elon Musk will purchase Twitter." As you can see, it has evolved in how it gives the answer. I think over time it will be us, using this technology, who will pretty much be able to tell it, "that answer is incorrect," and ask for a different answer. That's called self-learning. Then there's supervised learning: I think there are going to be companies - and as I said, OpenAI is already working on this - that will try to tweak their models to be smarter, to handle different prompts, and maybe even filter out some of the data based on its sources. Maybe there's a source they really don't trust that provides wrong information - wikis, publicly known domains where a lot of people can write their own opinions. That might not be a good source for a chatbot, or ChatGPT, to tap into. So I think over time the thing itself will hopefully fix it. 
But in general, to your second question, there are always going to be trust issues with this technology, just because it is based on the data that we humans enter. Right? At the end of the day, it's all about the data it finds on the internet - and who entered the data on the internet? It's us. Maybe over time, as the technology evolves - there are already a lot of companies trying to fix some of this by building counter-tools to fact-check - it will improve, and maybe those get integrated. But at the end of the day, it's going to be up to us: how we look at this technology, how we look at those answers, and how much we really trust those answers ourselves. Lenssen: It's somewhat related, but I think liability goes hand in hand with that. You know, if a user were to ask a question about a more serious topic - something health-related, something finance-related - and the advice they get ends up not being to that user's benefit, is somebody liable for what came out of it? It's AI, it's a robot. Kvakic: Yeah. I think it boils down to the fact that it's a robot, right? And I think a lot of these companies, like OpenAI, have expressed their concerns when it comes to liability. I think there's going to be a lot of fine print on all of these applications so that you don't fully rely on them. And again, understanding the technology underneath and how it gets this data is going to be crucial. I always say, use this to get past your writer's block - use it to get you started. Don't use it for the finish line; that should be you. Be it writing emails or whatever else you're using this technology for, as long as we are of the mindset that it's not getting us to the finish line but getting us started, I think we should be okay. We should be in good shape there. But definitely, liability is a big concern. Lenssen: What about privacy? 
I'm thinking of users who are maybe using ChatGPT to assist them in their work - say somebody enters private, confidential information looking for an answer. That's now been fed into this technology. Is there a risk of it being disseminated in a way that it shouldn't be? Kvakic: Absolutely. I think that's one of the biggest things. Even here at Raymond James, we assess this fairly frequently to see, is this going to be something we allow? We know many companies in our sector have blocked this technology. We know Italy as a country has blocked ChatGPT altogether because of the same concerns. We know Amazon, for example, has found its own proprietary code showing up in responses, because people had entered it, and they issued an internal policy that says, please do not share. And our policies are the same. Here's what I'm going to say about ChatGPT: that policy and that rule are no different from email. If you have a policy that says "Do not share PHI or PII data in an email," the same rule applies to the website, because in reality this is not a ChatGPT issue; this is a general issue. If I really wanted to share information, as I said, I could use email and share that information. I could go on Google and type my client's personal PHI or PII data into the search bar, and that's stored somewhere else as well. So we always say: be smart, follow the company policy that already exists, don't share PHI or PII data, and literally use it to get past writer's block, not as a final solution. Lenssen: Let's shift over to some of the creative capabilities. This is so interesting to me. ChatGPT can write its own music, it can write poetry, it can write plays. Is what comes out of it copyrighted, or copyrightable? Who owns that creative material? Kvakic: My answer will be the answer that I know so far. What I know now - things could change, laws can be passed - but that is definitely a concern. We've seen a lot of buzz around this in the news, with people saying, "these are my lyrics." 
Here's where we stand right now. Law.com, if you go to that website, said we are far from having a final word on whether training AI models on datasets constitutes copyright infringement. Now, if you dig deeper into this - we've researched it in IT as well - here's fundamentally where we land: if the lyrics that ChatGPT generates are based on multiple different lyrics and are generated by the AI, then there's really no human involved in that process. It's a computer, it's the data, it's the AI behind the scenes that generated this content. If there were a human involved, then it could be considered copyright infringement. But because it reads a lot of lyrics - it will read hundreds of different lyrics - and based on that generates its own response, a different set of lyrics or whatever we are looking at, an email or a book or whatever, it's still being generated by AI without any human involvement behind the scenes. And because of that alone, at this stage today - because it is truly generated by the AI, not really a human - we feel it's not subject to any copyright infringement claims. As I mentioned earlier, that's why it's called generative AI: because it generates the output based on the data it reads behind the scenes. But time will tell, right? I mean, right now that's the answer. A lot of people are looking into this. Will new laws be passed anytime soon that dictate otherwise? Possibly, but as it stands right now, its output is not considered copyrighted material. Lenssen: Let's talk for a minute about deepfakes. For those of our listeners who may not be familiar with that term, think of a video or a photo that looks very, very real but wasn't actually taken as a real video or photo. It's been manipulated - or, if you like, think of something being heavily photoshopped, but you can't tell. 
This very creative and capable technology, it seems, could be used to produce very realistic images or videos or what have you that aren't actually real. Have there been discussions around that? Kvakic: Absolutely. So deepfakes are a big concern - video or audio. Audio more than anything else, because voice ID is used as a security login method at some banks in the U.S. and Europe. There's actually an article in Vice where somebody used this technology to generate their own voice: in response to the phone prompts, they typed what they wanted to say, the audio generation produced their voice, and they were able to get through and pass the security check. But the reality here is that there's always technology to counter or detect deepfakes. Again, time will tell how far these bad actors will go with deepfakes and how fast this technology will evolve - and whether the counter-technology to detect deepfakes will move at the same speed. Will it be able to catch up with, or even supersede, what the bad actors are doing with deepfakes? Deepfakes in general are, in some contexts, in some ways, fun - you know, Tom Cruise on YouTube, if you haven't seen that. - Keanu Reeves, yeah. - Yeah, it's fun, but it can be dangerous for sure. Lenssen: Let's talk a little bit more about those bad actors and potentially some risks there. I'm thinking about email phishing or spam phone calls or things like that, where right now you can sometimes tell from the language or the wording in an email that it is not a real user or a native speaker of your language, and that's a giveaway for many recipients. Are there concerns that those attempts could become more and more realistic, and potentially more dangerous, with the use of this technology? Kvakic: I think the velocity of these attacks is a concern - not really the quality of the attacks. 
You mentioned earlier: right now, when non-English speakers send an email, it might not be spelled correctly or might be grammatically incorrect. And even though they try to sound like your financial advisor, you can quickly read it and say, there's no way my financial advisor typed those paragraphs. So with ChatGPT and the generation of emails, that ends up being a real concern for us. Raymond James is well protected in this space. Smaller companies might be at higher risk because they might not have invested in this area the way Raymond James IT and our Cyber Threat Center have. So to us, it's the velocity, but really not the quality. The bad actors who are truly capable of penetrating firm systems, encrypting the data and holding it for ransom, and so on - they don't rely on ChatGPT. They know what to do. Getting into the system is maybe step one, but the real skill is knowing what to do once you're in the system. Hackers don't have time to say, wait, I just got in - let me ask ChatGPT what to do next, right? So to us, it's the velocity of the attacks that is the risk. And I think with ChatGPT and equivalent technologies, we'll be able to improve as well. I mean, we talked early on about different sectors where this can be used; cybersecurity is definitely one field in IT where we can leverage this technology too. Lenssen: We've touched on a handful of risks and unknowns. There are even so many more we haven't gotten to - things like bias, or plagiarism by students. But overall, knowing that so many of these risks and unknowns still exist, when it comes to regulation, what do you see in the future for this technology? Kvakic: We know AI regulation is coming. Europe already started down this path with what's called GDPR, which stands for General Data Protection Regulation. Now the question will be, how much? Competition and competitiveness in this space are extremely important. 
China will not regulate as much as potentially other countries will, and what will they get out of that, versus countries that do regulate AI heavily? So I think there's a competitiveness behind the scenes, and leveraging this technology will be a factor, but we know regulation has to come. If I can foresee something, I think it's going to be around trust - we talked about this a little bit - trusting the data and trusting the output. There might be some regulations around that. But I would be very surprised if it's around usage itself. Lenssen: I'm so appreciative of your perspective. This is such an interesting topic. I want to wrap up with one final question for our listeners. What do you and your team plan to be watching when it comes to this technology over, say, the year ahead? There's still so much to learn and know and uncover. What do you have your eyes on? Kvakic: Yeah, I'm actually lucky that Raymond James is a company that invests in IT and has invested in my team. We've been at this for about a year and a half now, and my role and my team's role is to prepare the company - prepare Raymond James - for what's to come. So we've been looking at a variety of different technologies, including, of course, ChatGPT. And I want to use this opportunity to say to everybody listening, and especially our clients and our financial advisors: even though you might not hear much about what's happening in IT, my team and the rest of the IT associates are focusing a lot on emerging technology. If you've heard of it, chances are almost a hundred percent that we've heard of it as well. Now, with that said, ChatGPT, as you mentioned earlier, has been blowing up these past couple of months. So right now we're really looking at use cases. Where can we really use this? There are questions around what the really good use cases are that we can benefit from, not just in... 
Raymond James doesn't innovate for innovation's sake. We can explore this technology, see how mature it is, and ask: can we really use it to make our clients more efficient, more competitive? To me, the big thing right now is that personal assistant story I just told you - Microsoft 365 Copilot. How can we leverage this technology? How can we apply it? How can we make it available to all associates so they can be more productive, with a personal assistant next to them? So to me, what's next? Next is more. I feel a lot of new, exciting stuff will be coming. Lenssen: Our Raymond James IT Head of Innovation, Kemal Kvakic. Kemal, thank you again for your time today. I really appreciate you sitting down and speaking with me about this. Kvakic: Paige, it's my pleasure. Thank you for having me. Lenssen: Listeners, thanks for tuning in. You can find more episodes of For What It's Worth on Spotify, Apple Podcasts and RaymondJames.com, so be sure to subscribe. For What It's Worth, I'll see you next time. All opinions and information, including any price references or market forecasts, correspond to the recording date listed in this episode's description. Any performance figures noted do not include fees or charges, which would reduce an investor's returns. The information contained in this podcast is not research, nor does it constitute the provision of any investment, financial, legal, accounting or tax advice or recommendations to the listener. Raymond James and its financial advisors do not provide tax or legal advice, and you should discuss any tax or legal matters with the appropriate professional. Past performance is not an indication of future results. There is no assurance any investment strategy will be successful. Investing involves risk, and investors may incur a profit or a loss. Investment products are: not deposits, not FDIC/NCUA insured, not insured by any government agency, not bank guaranteed, subject to risk and may lose value. 
Copyright 2020 Raymond James & Associates, Inc., member New York Stock Exchange/SIPC. Copyright 2020 Raymond James Financial Services, Inc., member FINRA/SIPC. Raymond James & Associates, Inc. and Raymond James Financial Services, Inc. are affiliates of Raymond James Bank.