{"id":3417,"date":"2023-07-21T16:23:30","date_gmt":"2023-07-21T16:23:30","guid":{"rendered":"https:\/\/www.clouddatainsights.com\/?p=3417"},"modified":"2023-07-21T16:23:35","modified_gmt":"2023-07-21T16:23:35","slug":"first-steps-toward-leveraging-enterprise-chatgpt","status":"publish","type":"post","link":"https:\/\/www.clouddatainsights.com\/first-steps-toward-leveraging-enterprise-chatgpt\/","title":{"rendered":"First Steps toward Leveraging Enterprise ChatGPT"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/06\/Depositphotos_642804356_S.jpg\" alt=\"Enterprise ChatGPT\" class=\"wp-image-3418\" width=\"750\" height=\"500\" srcset=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/06\/Depositphotos_642804356_S.jpg 1000w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/06\/Depositphotos_642804356_S-300x200.jpg 300w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/06\/Depositphotos_642804356_S-768x512.jpg 768w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/06\/Depositphotos_642804356_S-930x620.jpg 930w\" sizes=\"(max-width: 750px) 100vw, 750px\" \/><figcaption class=\"wp-element-caption\"><em>Generative AI is changing the enterprise, but is everyone ready?<\/em><\/figcaption><\/figure><\/div>\n\n\n<p>What started as a public beta of the breakthrough solution ChatGPT has taken the world by storm. Worldwide, people (including me), some without a technical background, rushed to try it out. Perhaps because it seemed like a grand experiment or a game which attracted such a large audience. We had no sense of what it could do or&nbsp; how it could be used until we started \u201cprompting.\u201d It has taken a few months, but applications in all commercial and non-commercial spheres have emerged. And they promise to solve long-time problems like how to make it easier for humans to interface with information or language.<\/p>\n\n\n\n<p>We are seeing data conferences pivot to focus on generative AI, mostly because this is the topic that technical and business audiences alike are hungry to explore.&nbsp;<\/p>\n\n\n\n<p>What generative means for the enterprise still remains to be discovered. But to those watching the technology landscape, it\u2019s clear that technology providers are not losing time as they put their agility to the test to embed ChatGPT within their offerings at least as an interface to their solutions as Ed Thompson, Matillion\u2019s CTO and Co-founder described.<\/p>\n\n\n\n<p>Here is a summary of our conversation. (<em>See Ed Thompson&#8217;s bio below.<\/em>)<\/p>\n\n\n\n<p><strong>CDI: You mentioned that the Gartner AI &amp; Analytics Summit was different from other large conferences\u2013all the conversations you had with enterprise users and technology providers were about data and analytics. Were there any conversations that were surprising or particularly insightful?<\/strong><\/p>\n\n\n\n<p><strong>Ed:<\/strong> It&#8217;s clear that there\u2019s a shift coming. Everyone is finding that reality is changing a bit. Take the technology axis. Some of the advancements in AI and large language models are changing everyone&#8217;s job a little. Look at how the funding reality for growth businesses is changing. Half of the expo hall is probably made up of similarly funded growth businesses like Matillion. 
Ed: It's clear that there's a shift coming. Everyone is finding that reality is changing a bit. Take the technology axis: some of the advancements in AI and large language models are changing everyone's job a little. Look at how the funding reality for growth businesses is changing. Half of the expo hall is probably made up of similarly funded growth businesses like Matillion. What that means is going to play out over the next year. Hopefully, we won't get too many shocks, but whether we do or don't, there's still going to be change over the next year. After synthesizing everyone's opinions and thoughts here, I think "business as usual" is not going to look the same in a year's time.

See also: Nearly Every Job Will Be Touched by Generative AI (https://www.rtinsights.com/nearly-every-job-will-be-touched-by-generative-ai/)

CDI: LLMs and ChatGPT are making people either not sleep because they're so excited or not sleep because they're so worried. If you were to call out an emerging technology trend that is really going to have an impact on your company, what would it be?

Ed: That's the big one. We've spent some time trying to get a couple of levels deep on them because you've got to cut through the enormous amount of hype. But there is definitely something new at the core. ChatGPT is very good at some things and a lot less good at others, so I think stage one is: don't ignore it. Then start leveraging it in the ways it works really well. I was glad to hear that some of our team in marketing and sales are already using ChatGPT to proofread and punch up emails and to make things more readable. That's a really good use for it, and it's just driving efficiency in the business. Similarly, over on the engineering side, we find it's brilliant with code. I've written some code by prompting, "Please write me some unit tests around this code." By doing the job more efficiently, it's making code more resilient. It will be really interesting to see how that plays through. Does that change the size of the engineering team? Does it change the output productivity? Nobody knows right now.
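The unit-test prompt Ed describes is easy to reproduce programmatically. Below is a minimal sketch using the OpenAI Python library (the 0.x-era ChatCompletion API that was current when this conversation took place); the model name and the sample function are illustrative placeholders, and any generated tests should be reviewed before they are committed.

```python
# Minimal sketch: asking an LLM to draft unit tests for a piece of code.
# Assumes the openai package (0.27-era API) and a valid API key; the model
# name and the function under test are placeholders, not anything Matillion uses.
import openai

openai.api_key = "sk-..."  # placeholder key

FUNCTION_UNDER_TEST = '''
def normalize_email(raw: str) -> str:
    """Lowercase and strip whitespace from an email address."""
    return raw.strip().lower()
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a senior Python engineer."},
        {"role": "user",
         "content": "Please write me some pytest unit tests around this code:\n"
                    + FUNCTION_UNDER_TEST},
    ],
)

# The reply is plain text containing the suggested test module.
print(response["choices"][0]["message"]["content"])
```

In practice, the generated tests act as a first draft: they make it cheap to cover edge cases, which is the "more resilient code" effect Ed mentions, but a human still decides what actually ships.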
CDI: What's standing in the way of some organizations really leveraging LLMs and ChatGPT?

Ed: The other aspect of it: one of the reasons ChatGPT is very good at coding and at language is that it has been fed with lots of training data, like the whole of the internet. For data use cases, it's quite challenging for a company that has a particular model or a particular language where there isn't an enormous data set to train on, just a relatively small one. If you train on a small data set, you don't get such fantastic results on the output. Some companies have large volumes of metadata about what they do and what their customers do; they are in a much stronger position. There have been some clever moves, like Microsoft buying GitHub, which means they have access to so much data that a customer doesn't get. Now they can feed that into the algorithm and get great results, great tools, out the back of it. That's a real challenge for smaller vendors because they can't just go up to GitHub and get much of the world's programming.

CDI: GitHub still hosts open-source code in open repositories, so anyone can train their model on the code, but you mentioned the metadata. It seems that it often comes down to the metadata, which is where you get those rich insights. Do you see a way to get access to the metadata for building models?

Ed: Sure, it's easier if you own GitHub. Many organizations have been able to build up a large cache of data about what their customers are doing. They're going to be in a better position to leverage that as training data. The final bit that's particularly interesting for Matillion, as a player in the data integration space, is that there are lots of people who want to build their own large language models. One of our partners is Databricks, which has customers with huge amounts of data from various sources, and they're using Databricks to build LLMs. What's exciting for Matillion is that, as with any AI or ML, getting the data and doing the right data preparation are key. Matillion's role is to get the data, provide access to it, and transform it into a format that is suitable for feeding an LLM.

Take the idea of scraping GitHub: the data needs a heck of a lot of transformation and integration to make it a suitable data set for training. So even if a company were to get hold of this data, they would still need assistance in preparing it to become good training data. I think being a kind of data broker in this new world is a really great position.
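What "preparing it to become good training data" looks like in the simplest case is worth spelling out. The sketch below is not Matillion's pipeline; it is a generic, hypothetical example of the cleanup step Ed describes: filtering and deduplicating a raw document dump and emitting a JSONL file of the kind many LLM fine-tuning tools consume. The directory names, size thresholds, and record layout are all assumptions.

```python
# Hypothetical illustration of basic LLM training-data preparation:
# filter out junk documents, drop exact duplicates, write one JSON record per line.
import hashlib
import json
from pathlib import Path

RAW_DIR = Path("raw_dump")             # assumed folder of scraped .txt files
OUT_FILE = Path("training_data.jsonl")

seen_hashes = set()
kept = dropped = 0

with OUT_FILE.open("w", encoding="utf-8") as out:
    for path in sorted(RAW_DIR.glob("**/*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore").strip()

        # Drop empty, tiny, or implausibly large documents (thresholds are arbitrary).
        if not (200 <= len(text) <= 100_000):
            dropped += 1
            continue

        # Exact-duplicate removal by content hash.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            dropped += 1
            continue
        seen_hashes.add(digest)

        # One record per document, the shape many fine-tuning tools expect.
        out.write(json.dumps({"text": text, "source": str(path)}) + "\n")
        kept += 1

print(f"kept {kept} documents, dropped {dropped}")
```

Real preparation adds language filtering, near-duplicate detection, PII scrubbing, and license checks, which is exactly why Ed frames this as an ongoing integration problem rather than a one-off script.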
The ability to build resilient data pipelines is still critical. Even with ChatGPT, when OpenAI went live with it, its training data was two years out of date. They've updated it some since then, but it goes to show that constantly feeding in new data is hard. Matillion is all about the constant movement and constant transformation of data, even after launch. Everyone is talking about creating data products, but few people are talking about day one, day two, day three. Updating the data is going to be absolutely essential, right? We've all seen how horrific, or at least frightening and insulting, the mistakes can be. These chat algorithms' output is only as good as the training data. You have to keep feeding them more data, or less biased data, so that the bias is written out of the equation, if you will.

See also: The 3 Steps to Make Responsible AI a Reality (https://www.clouddatainsights.com/the-3-steps-to-make-responsible-ai-a-reality/)

CDI: You mentioned resilience. What differentiators does Matillion bring that are really necessary for people to be able to crack this opportunity, or this challenge?

Ed: Matillion's core strength has always been data transformation. When we started using cloud data warehouse technology, we did our transformations in the cloud data warehouse instead of doing them in memory as the data was moved. Customers who wanted to keep an on-premises data warehouse could still use Matillion to do the transformations their business had always relied on, and use their low-code tools, like Informatica or Talend, to maintain their data team's productivity.

From where I'm sitting, there seems to be some tension in data teams. Our customers' typical data team tends to have a mixture of people who come from a lab-coat background or from analytics tools like Informatica and Talend. These tend to consider the business user more. They are very data literate and understand the value of low-code tools for getting the job done. And they're being met by colleagues coming from an engineering background who are driving toward engineering best practices. They want to write code, using frameworks like dbt, or just pure SQL, or Python and SQL in notebooks and such. We are catering to both types: there is no reason why you can't have low code and high code orchestrated together in your data pipeline, and sometimes you can actually transition between the two. To give you a concrete example, our Data Productivity Cloud puts source control on everything we do. That's always been a difficult thing for the lab-coat side to do. We want to crack that so customers feel like they're managing their data assets in exactly the same way as they would manage source code assets on a development team. That's seen as the best practice, probably because it is.
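To make the "low code and high code orchestrated together" idea concrete, here is a minimal, self-contained sketch. It is not Matillion's Data Productivity Cloud; it simply shows the pattern Ed describes, with declarative SQL steps and a hand-written Python step executed by the same tiny orchestrator. The tables and steps are invented, and SQLite stands in for a cloud data warehouse.

```python
# Hypothetical sketch: declarative ("low-code" style) SQL steps and an imperative
# ("high-code") Python step run side by side in one ordered pipeline.
import sqlite3

conn = sqlite3.connect(":memory:")  # SQLite stands in for the warehouse here
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 10.0, "uk"), (2, 25.5, "US"), (3, None, "de")],
)

def drop_null_amounts(connection):
    """High-code step: arbitrary Python logic a purely declarative tool may not express."""
    connection.execute("DELETE FROM raw_orders WHERE amount IS NULL")

# A step is either a SQL string or a Python callable; the orchestrator runs them in order.
pipeline = [
    drop_null_amounts,
    "UPDATE raw_orders SET country = UPPER(country)",
    "CREATE TABLE orders_clean AS SELECT id, amount, country FROM raw_orders",
]

for step in pipeline:
    step(conn) if callable(step) else conn.execute(step)

print(conn.execute("SELECT * FROM orders_clean").fetchall())
# [(1, 10.0, 'UK'), (2, 25.5, 'US')]
```

Keeping both kinds of step in one pipeline definition is also what makes Ed's source-control point tractable: the whole pipeline, low code and high code alike, is just text that can be versioned.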
CDI: Source control is probably not just hard for the lab coats on the data team to deal with. I think it might be a foreign concept to those who create products for the business user.

Ed: If you look at data teams that are low-code only, it's not that they don't do it. It's just that they're less mature and don't think in that way.

CDI: The business user is having so much more influence now on how data is accessed and delivered. What are you seeing with your customers?

Ed: I've always been surprised by how big our customers' data estates grow. Good data and good data integration lead to good analytics, which leads to more questions, which lead to requests for more data. The cycle goes around, and it matures. Even though we built on these really scalable cloud data platforms, customers were still managing to come in with scaling problems, both because data volumes got really large and because of the amount of analytics and data transformation they were doing. We'd expect them to be running 10 or 15 simultaneous data pipelines, but they were running 100 pipelines simultaneously.

See also: Explore the Mutual Advantages of Generative AI and the Cloud (https://www.clouddatainsights.com/explore-the-mutual-advantages-of-generative-ai-and-the-cloud/)

One of the drivers for this growth of data is the number of sources that are available now. We have connectors to almost everything that our customers come to us with. For example, we have connectors to Snapchat and Instagram; I'm not sure where the business value is with those. And customers don't always ask themselves whether the executive team really needs that data, or whether it needs it in real time.

When customers ask about streaming data and real-time data, I draw a chart with cost along the bottom and time on the vertical axis. The more speed you want, the more it will cost. If you can live with data being one minute out of date, you can have that relatively affordably. If you want it within one second, that's going to cost a lot. If you want less latency than that, it's very, very expensive.

So: 100 pipelines to maybe as many sources and data formats, with various latency requirements. Making sure that scales and remains resilient is not easy.

Customers have been moving away from deploying infrastructure on-premises. They were super happy to get out of the data center and into the cloud. Now they don't want to run instances in the cloud either: it's expensive, and besides, they don't want the management burden. The only thing they want to worry about is data sovereignty, which is usually solved by a hybrid approach. In the end, they want to make sure their data pipelines run, and if they're not running correctly, for it not to be their problem to fix.

Ed Thompson Bio: Ed Thompson is CTO and co-founder of Matillion. He started his career as an IBM software consultant and spent 11 years consulting for some of the premier blue-chip companies in the UK. Along with CEO Matthew Scullion, he launched Matillion in 2011 and set about building a crack team of data integration experts and software engineers. He and his team launched Matillion's flagship ETL product in 2014, which has driven the company's growth ever since. Ed's strength is his ability to bring together best-in-class technologies from across the software ecosystem and apply them to solving the deep and complex requirements of modern businesses in new and disruptive ways. He is a graduate of the University of Salford with a degree in Computer Science. A proud father of three (plus two dogs), he has recently taken up training assistance-dog puppies for blind people.