Day 0 Event #187 Your Organization Is Ready for AI, But Is Your Data
Session at a Glance
Summary
This presentation focused on artificial intelligence (AI) readiness, particularly in the context of generative AI and its impact on organizations. The speaker, Alaa Zaher from Gartner, discussed the evolution of AI from traditional machine learning to generative AI, highlighting the revolutionary capabilities of large language models like ChatGPT. He emphasized that while generative AI has made AI more accessible to individuals, organizations face challenges in implementing it safely and effectively.
Zaher introduced the concept of a “technology sandwich” to describe the evolving AI landscape in enterprises. This framework includes layers for data sources, AI platforms, and governance structures. He stressed the importance of data management, semantics, and fine-tuning in preparing for AI implementation. The speaker also highlighted the shift from centralized, structured data to decentralized, unstructured data in AI applications.
The presentation addressed the risks associated with AI adoption, including data interpretation errors, security concerns, and the phenomenon of “bring your own AI.” Zaher emphasized the need for organizations to develop robust governance structures and risk management practices to mitigate these challenges. He noted that while AI vendors are racing to embed AI in their products, organizations should take a measured approach to AI adoption, focusing on productivity improvements unless their industry is being disrupted.
The discussion concluded with questions from the audience, touching on topics such as successful implementations of the technology sandwich concept, the role of major AI labs in ensuring AI safety, and the trend of organizations developing their own generative AI models. Overall, the presentation provided a comprehensive overview of the current state of AI readiness and the considerations organizations must address as they navigate this rapidly evolving landscape.
Keypoints
Major discussion points:
– Overview of AI history and recent developments in generative AI
– Components needed for AI readiness in organizations (data, talent, infrastructure)
– New AI paradigm emerging with unstructured data and decentralized applications
– Importance of data management, semantics, and fine-tuning for AI
– Concept of the “technology sandwich” for managing AI in organizations
Overall purpose:
The purpose of this discussion was to provide an overview of AI developments, particularly generative AI, and discuss what organizations need to do to prepare for and effectively implement AI technologies. The speaker aimed to highlight both the opportunities and challenges of AI adoption.
Tone:
The overall tone was informative and educational, with the speaker taking on the role of an expert explaining complex topics to an audience. The tone remained consistent throughout, with occasional moments of humor or lightheartedness to keep the audience engaged. The speaker maintained a balance between enthusiasm for AI’s potential and caution about its risks and implementation challenges.
Speakers
– Alaa Zaher: Senior Executive Partner at Gartner; expert in technology research and digital transformation
– Audience: Multiple unnamed audience members asking questions
Additional speakers:
– Amal: Audience member who asked a question about successful applications of the “technology sandwich” concept
– Mohamed: Audience member who asked about organizations developing their own generative AI models
– Martina Legal-Malakova: Audience member from GAIAxApp Slovakia, focusing on data spaces and data sharing
Full session report
AI Readiness and Implementation: A Comprehensive Overview
This detailed summary expands on a presentation by Alaa Zaher, Senior Executive Partner at Gartner, focusing on artificial intelligence (AI) readiness and its impact on organisations, particularly in the context of generative AI.
Evolution and Impact of AI
The discussion began by tracing the evolution of AI from traditional machine learning to the current era of generative AI. Zaher emphasised the revolutionary capabilities of large language models like ChatGPT, which have made AI more accessible to individuals. This democratisation of AI technology is changing how people interact with and utilise AI in their daily lives.
The impact of AI on industries was highlighted, with Zaher noting that an increasing percentage of CEOs believe generative AI will have a significant impact on their sector over the coming years. This underscores the urgency for organisations to consider their AI readiness and strategy. Zaher further illustrated the rapid pace of AI development by stating that new foundation models are being created at an unprecedented rate, emphasising the intense competition in the field.
Data Requirements and Management for AI
A crucial point in the discussion was the importance of data for AI systems. Zaher emphasised that without data, there can be no AI, highlighting the critical importance of data in a relatable way. This led to an explanation of different data types and sources needed for various AI applications.
The presentation highlighted a shift in data requirements from traditional machine learning, which relied on structured, centralised data, to generative AI, which can work with unstructured data from various sources. Despite this evolution, Zaher stressed that data management and governance remain necessary for effective AI use. Organisations need to focus on data semantics and fine-tuning for AI implementation.
The concept of data sharing was also discussed, touching on its implications for both social and economic spheres, particularly in sectors like manufacturing and energy.
Enterprise AI Readiness and the “Technology Sandwich”
Zaher introduced the concept of a “technology sandwich” to describe the evolving AI landscape in enterprises. This framework includes layers for data sources, AI platforms, and governance structures. The bottom layer consists of various data sources, including structured and unstructured data. The middle layer comprises AI platforms and tools, such as large language models and other AI technologies. The top layer focuses on governance, including security measures, access controls, and risk management practices.
He emphasised that organisations need to prepare for a new AI paradigm with decentralised applications and develop this “technology sandwich” approach to manage AI risks and implementation. This framework helps organisations understand the components necessary for successful AI integration and the potential challenges they may face.
The discussion touched on the trade-off between using third-party AI services and developing in-house capabilities. Zaher used a metaphor to communicate the risks associated with giving AI systems access to unstructured organisational data, comparing it to letting someone into a messy room. This led to a discussion about the importance of data management, access rights, and security considerations when implementing AI systems in organisations.
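As a purely illustrative aid, the layered "technology sandwich" described above could be represented as a simple data structure. The layer contents below are paraphrased from the talk; this is not a formal Gartner specification, and the `audit` helper is an invented example of how one might check the stack.

```python
# Illustrative only: the "technology sandwich" layers as a simple
# data structure. Layer contents are paraphrased from the talk,
# not a formal Gartner specification.

technology_sandwich = {
    "top: governance": [
        "security measures", "access controls", "risk management",
    ],
    "middle: AI platforms and tools": [
        "large language models", "other AI technologies",
    ],
    "bottom: data sources": [
        "structured data", "unstructured data",
    ],
}

def audit(stack):
    """Flag any layer left empty; every layer is load-bearing."""
    return [layer for layer, items in stack.items() if not items]

assert audit(technology_sandwich) == []  # all three layers populated
```

The point of the structure is that no layer can be skipped: data without governance, or governance without platforms, leaves the sandwich incomplete.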
Challenges and Considerations for AI Adoption
The presentation addressed several challenges associated with AI adoption, including data interpretation errors, security concerns, and the phenomenon of “bring your own AI.” This refers to employees using personal AI tools for work purposes, potentially exposing company data to external systems. Zaher emphasised the need for organisations to develop robust governance structures and risk management practices to mitigate these challenges.
There was agreement on the importance of strong security practices and governance for successful AI implementation. The discussion also touched on the need to balance the push from AI vendors with organisational readiness. Zaher noted that while AI vendors are racing to embed AI in their products, organisations should take a measured approach to AI adoption, focusing on productivity improvements unless their industry is being disrupted.
Cost implications of AI implementation were also discussed, with Zaher highlighting the potential for significant expenses related to data preparation, model training, and ongoing maintenance of AI systems.
Audience Questions and Future Considerations
The presentation concluded with several thought-provoking questions from the audience:
1. An audience member asked about successful applications of the “technology sandwich” concept, seeking real-world implementation examples.
2. Another inquiry focused on the role of major AI labs like Google DeepMind and OpenAI in ensuring AI safety within the technology sandwich framework.
3. A question was raised about whether enterprises are expected to develop their own generative AI models to protect their data, or if big tech companies will dominate this space.
4. An audience member questioned the categorisation of data sharing in the context of data analytics and AI, exploring its implications for both social and business initiatives.
These questions highlighted unresolved issues in AI implementation, including the full extent of generative AI’s impact on various industries, best practices for balancing vendor pressure with organisational readiness, optimal strategies for cost management in AI deployment, and the role and implications of data sharing across different sectors.
In conclusion, the presentation provided a comprehensive overview of the current state of AI readiness and the considerations organisations must address as they navigate this rapidly evolving landscape. It emphasised the need for a measured approach to AI adoption, strong data management practices, and robust security and governance frameworks to ensure successful and responsible AI implementation.
Session Transcript
Alaa Zaher: Okay, can everyone hear me? Hello? Yes, okay, great. So I'll be raising my voice like this in case you're not actually using the headset, so you can still hopefully hear me. I think it makes for a more natural interaction. And because my voice is also probably quite loud, if you're using the headset, you can turn down the volume. So first of all, thank you for coming, and welcome. I'd like to introduce myself. I'm Alaa Zaher, a Senior Executive Partner at Gartner. Gartner is a technology research and advisory company, and we have the privilege of being here today at the IGF digital government agency event to talk about a number of things my colleagues have been talking about. I'll be talking specifically about readiness when it comes to artificial intelligence, and specifically on data, but I'll also expand the discussion beyond the data aspect of readiness. So I'll basically be looking at a little bit of the history of AI, the impact of generative AI, and then what that means for consumers, what it means for enterprises. And then we're going to zoom in on the enterprise part to really look at what it takes for you to build the right capability in your organization in order to harness the power of artificial intelligence. How does that sound? All good? All right, let's get going. So I'd like to start with this quote from this gentleman over there, Sir Arthur C. Clarke. You may not know him, but he was a British futurist and screenplay writer, somebody who was really embedded in innovation, and he said that any sufficiently advanced technology is indistinguishable from magic, right? Come to think of it, two years ago, end of 2022, 2023, I think what we saw with this thing, ChatGPT, is nothing short of magic, right? I think we all agree the world was stunned by what this chatbot can do, right? And you would think, we've had chatbots for ages, yeah?
So if you break down the word ChatGPT: we've had chatbots, but they weren't as magical, they weren't as spectacular. So what is it that makes this one so magical? It's the other part of the name, the GPT. And what is GPT? Well, it's short for Generative Pre-trained Transformer. Never mind, we'll just call it a large language model. So what is a large language model? It is one derivative of artificial intelligence, right? Artificial intelligence has been around for decades, right? But it is the large language models, the generative AI, basically. So if you kind of look at the landscape of artificial intelligence... Sorry, just before that. In case you were hiding under a rock over the past two years, let me show you what ChatGPT can do, right? So when I first laid my hands on the technology, I asked it this question. I wanted to test it. I said, summarize the story of Cinderella in 100 words. And it went, in a split second, and generated for me this wonderful summary, in 100 words. Cinderella recounts the story of a kind-hearted, mistreated young woman living with her wicked stepmother, and so on. Brilliant. I couldn't have summarized it this way in two hours, maybe even half a day, right? Incredible. Now I challenged it a little bit further. I asked it, now, can you summarize it for me in 50 words? And again, bang, in a split second, it did it in 50 words. In fact, it was 51, to be honest, so just a little bit over. Nonetheless, it's incredible. Now I really wanted to test it even further, so I kind of pushed the boundaries. And I said, what if I ask it in a different language, in Arabic? Mind you, this was ChatGPT 3, not even 3.5, right? So I gave it a really big challenge: give me a piece of poetry in Arabic. So this is the question I asked it, for those of us who are Arabic speakers.
That's what I asked it, okay? Now, that's quite a challenge for a computer, right? And there you go, that's what it gave me. [He recites the AI-generated Arabic poem about mulukhiyah.] Wow, incredible, isn't it? And then it goes on and on. All right, so it might not be perfect poetry, but it's incredible, you know? And again, it generated it in seconds. So that is what ChatGPT is capable of, that's what large language models are capable of. And this is just the tip of the iceberg, because now we have not just 3.5, not 4, but 4.5, 4o. So with that, let's go back to the technical discussion. The large language models are a subset of generative AI, right? There are different types of generative AI models, and large language models happen to be the one that ChatGPT uses, because it's for words and for verbal communication. But generative AI itself is a subset of machine learning, right? And that is basically the foundation of most artificial intelligence applications. It's not the only one, but it's the most common one, and it's been delivering for more than 15 years. So let's talk about machine learning for a moment. How does it work? We'll take the classic example of what machine learning does: classification, right? This is the basic function that a machine learning algorithm can perform. So let's say we want the machine learning model to classify a picture of a dog when it sees one, when we introduce it to it. So what do we do? We give it the training data set, as many pictures of dogs as we possibly can, right? Training data. Now, go ahead, examine those pictures, and we run it through a statistical model. There are many different statistical models that are used for this kind of predictive analysis.
Some of the most common are linear regression; you might have heard of decision trees. So what these essentially do is basically this: they examine each picture, they look for patterns in the pictures, and through the patterns, create a set of rules. And once it has the rules embedded, typically the rules are embedded in a black box, so it doesn't expose the rules to us, right? We can only judge it by the result. So we present to it another picture, one it hadn't seen in the past, of a dog. And if the rules are correct, it will identify that this is indeed a dog. And then another one. Oh, it says, that doesn't conform to my rules; that is not a dog. We present it with a third picture, which happens to be of a dog but doesn't quite fit the training data, and it gets confused. And so what do we do? We take that picture, feed it back to the model as part of the training data set, and it keeps learning. So this is what we call supervised learning, right? Supervised learning includes this kind of reinforcement, and it also includes what we call feature engineering. So as we were giving it the pictures of the dogs, we might have given it a little help with labeling, like: this is what the nose of a dog typically looks like. We call that feature engineering, right? So that's the essential mechanism through which machine learning models work. Now, let's project that onto what's happening in the generative AI world, in the large language models. How does that differ? You've got the training data. What do you think is the training data for ChatGPT? It's text, right? What kind of text? Is it some text that some company gave to ChatGPT? It's from the web. You're absolutely right. And it's not just from the web; it's the entire World Wide Web. It literally is the entire World Wide Web. 500 billion words. And you might think, how on earth did it ingest the entire World Wide Web? There are tools to do that. You can do it, yeah?
You can actually download the entire World Wide Web. So, 500 billion words. It was trained on that. We ran it through that statistical model. And here's where it gets a bit different. Remember that GPT part? That Generative Pre-trained Transformer? It's a transformer architecture. And that transformer architecture is really good at identifying context. If you really want to think about it, it is like an autocomplete on steroids. That's really what it is. So think of when you're typing a message on your iPhone and it kind of goes ahead of you. Like, I always tell my wife, I'm going to be... and then it suggests late, right? So how does it know that? Because it's found out that I've said it so often. And then it autocompletes. That's exactly what the pre-trained transformer does. It's a great autocomplete. So it gets the context without us having to label it, without that feature engineering that I was referring to in the classic machine learning. And this autocomplete is what allows it to summarize the story of Cinderella so quickly and without supervision, because it's learned the patterns and the context from the training data, which happens to be the entire World Wide Web. And so we get this wonderful summary that we were just looking at. So that's as far as the large language models and generative AI are concerned. Undoubtedly, generative AI is a revolutionary milestone in the world of artificial intelligence. But we need to remember that artificial intelligence is more than generative AI, and it has been delivering. Let's remind ourselves of what it has been delivering for us. Some of the things we take for granted: your Face ID on your iPhone, that's computer vision, which is a form of machine learning. On top of that, what it does in terms of identifying our friends and family members in our photo albums, et cetera, that is also machine learning, computer vision.
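The "autocomplete on steroids" idea a few lines back can be made concrete with the simplest possible language model, a bigram counter: it predicts the next word purely from how often each word has followed the current one in its training text. Real transformers attend over long contexts rather than a single preceding word, but the predict-the-next-token objective is the same. The tiny corpus below is invented to mirror the speaker's texting example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words have followed it."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        followers[cur][nxt] += 1
    return followers

def autocomplete(followers, word):
    """Predict the most frequent continuation of `word`."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# The phrase has been "typed" so often that "be" is almost
# always followed by "late".
corpus = "i am going to be late " * 5 + "i am going to be early"
model = train_bigrams(corpus)
print(autocomplete(model, "be"))     # late
print(autocomplete(model, "going"))  # to
```

Scaling this from one-word context to a learned representation of the whole preceding text is, loosely, what the transformer architecture adds.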
Now, take something, again, that probably never crosses our minds: we watch a football game or other sports, and we can see all that is going on on the screen. It's identified that there's a player moving there, the shot's going through that way. All of that is happening in real time, and that is artificial intelligence. Not just that; specifically in sports, a lot of clubs have been using artificial intelligence to inform them of the right team formation. So they study the opponent beforehand, and it informs the coach of what kind of formation to use, who they need to put on the field, et cetera. Liverpool is topping the Premier League this season, as they have in many past years, and they have a reputation for having one of the strongest AI teams in the Premier League. They're really leveraging that in a way that allows them to choose the players, so they don't often go after the biggest stars, right? You can see that they're getting the right players to allow Liverpool to win, but they're not necessarily paying the big bucks for the biggest stars. So that's another implementation of AI. Something else that's been happening, and is happening to us right now, is through all the social media apps: your Facebook, your Twitter, your Instagram, your TikTok, et cetera. And again, that's all happening in the background. It's looking at our behaviors and basically analyzing them in order either to suggest to us what post we should look at, who we should follow, or to actually deliver to us an ad, right? An ad that it predicts is going to be of interest to us based on our individual behaviors, right?
So that's been going on for at least the past decade, hasn't it, right? And we're all bombarded by social media in many different ways. That's artificial intelligence; it's machine learning. Now we move on to the enterprise world, the world of business, right? One industry that has been benefiting from AI is the insurance industry. So typically you make an insurance claim, and it either gets approved or rejected based on the kind of damage, the analysis of the accident, assessing the actual cost of the repair, et cetera. That used to be done by humans. Today it's supervised by humans, but essentially a lot of the effort that goes into this analysis is cut through artificial intelligence. So you run the pictures through an AI model, a machine learning model, and it's able to give you a recommendation on what to do with the claim, whether to approve it or not. Now, I come from a telecoms background. We used to use artificial intelligence for our network planning. So, you know, when you're handling 40 million customers, as my company was, you're basically looking at a very fluid environment of usage, right? You have certain times of the year where there's going to be a lot of demand in a particular area, then less demand elsewhere, and you need to be in a dynamic position to predict the usage. We used artificial intelligence, looking at petabytes of data, basically, to allow us to predict where the demand is going to come from, where we're going to have the shortages, et cetera. And not just that, we also used it for understanding customer behavior. If certain customers were likely to leave our network in favor of a competitor, it would be able to give us early warning signals that we needed to rescue that customer by giving them some compelling offer.
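All of these enterprise examples rest on the supervised-learning loop Zaher walked through with the dog pictures: train on labeled examples, test on unseen ones, and feed mistakes back into the training set. Here is a toy sketch of that loop, with "pictures" reduced to a single invented numeric feature (say, ear length) and the simplest possible one-rule classifier, nothing like a production vision model.

```python
# Toy supervised learning: a one-rule "decision stump" classifier.
# Each "picture" is reduced to one hypothetical numeric feature;
# labels are "dog" / "not dog".

def train(examples):
    """Learn a threshold separating dog from not-dog examples."""
    dogs = [x for x, label in examples if label == "dog"]
    others = [x for x, label in examples if label == "not dog"]
    # Put the decision boundary midway between the class means.
    return (sum(dogs) / len(dogs) + sum(others) / len(others)) / 2

def predict(threshold, x):
    return "dog" if x > threshold else "not dog"

# Initial training data: (feature, label) pairs.
training = [(9.0, "dog"), (11.0, "dog"), (3.0, "not dog"), (4.0, "not dog")]
t = train(training)

print(predict(t, 10.0))  # an unseen, clearly dog-like example
print(predict(t, 2.0))   # clearly not a dog

# A confusing case the model gets wrong: a small dog.
hard_case = (5.0, "dog")
if predict(t, hard_case[0]) != hard_case[1]:
    training.append(hard_case)  # feed it back as training data
    t = train(training)         # the model keeps learning
```

The real systems in the talk replace the single feature with thousands of learned ones and the threshold with a decision tree or regression model, but the train, test, correct, retrain loop is the same.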
Another machine learning application is predictive maintenance. In the manufacturing industry, you don't want to wait till a piece of equipment fails and stops your production line; you want to anticipate that early enough, and machine learning has been helping manufacturers do that, preemptive maintenance. Right, so as we're talking about AI for enterprises, the question is: what does it take to enable machine learning in your enterprise? The first element is data, right? No data, no AI. Sounds like a song from Bob Marley, right? No data, no AI. But it's data, and actually so much data. I was just telling you about the example from telecoms; we literally were processing petabytes of data on a daily basis, right? So the more data you have, the more opportunities you have for your machine learning models to learn, right? Take the example of the World Wide Web, or when we talked about the dogs. The more data you provide, the more reliable it's going to be. So the first element is data. The other element is the geek. A key component of any machine learning environment, of any AI environment, is a data scientist. So when we talked about those complicated models, like decision trees, regression analysis, et cetera, you need somebody who knows how to program. They need to have a combination of two skills: they need to be someone who understands statistics and also someone who's good at programming, typically Python, for example, or R. So you've got that very rare skill set combination of a data scientist. And there's been a race to hire those people. I can tell you, we were hiring them in my company, and they would stay with us for a year, and then they were off to double their salary or something. So it's a very competitive market, and that's what makes it difficult for companies to grow that AI capability within the organization. And what else do you need? Well, if you want to process petabytes of data, you need a huge data center, right?
So you need a lot of storage, you need a lot of compute. Now, that is something that is no longer absolutely necessary to own, because we have cloud, right? And I think that's a great thing. So cloud saves us from investing in huge data centers, especially since you don't need all that capacity on an ongoing basis; you need it when you run a model at a particular point in time. So if you get that elasticity from the cloud, then you just use it when you need it, and you're not paying for a full-scale data center. So if we were to summarize the components: you've got the compute and the algorithms, and they're pretty much accessible to any organization today. Why? Because the likes of AWS or Google will provide those to you, and you can pay as you go, so there's no real obstacle there. I mean, the obstacle might be that if you do it a lot, then the bill might go a little bit high, but at least it is accessible, right? So the models exist on platforms like AWS and Google and Microsoft, and the compute likewise. The challenge is here, right? The data and the talent. And I think that's what has held back organizations from progressing on AI over the past years. Only those who have been able to capture the data and the talent are the ones that have been able to make a difference through AI in the core of their business, right? So that's as far as the classic AI, the machine learning, is concerned. But that paradigm is changing, because generative AI is imposing a new paradigm, right? Specifically, what is changing? AI is becoming everyone's business. It's becoming accessible to everyone. You don't need to invest in the data scientist and the data in order to have some generative AI capability, right? So think of this: how many of us are able, in our day-to-day work, to leverage generative AI to help us with our writing?
Show of hands, please. Okay, that's the majority. Maybe PowerPoint? Less? Yeah, okay, that's great. So it is accessible to us because it's just so easy. You don't need to actually buy anything; you just pay pennies, and sometimes there are even free tools. And likewise for illustration and creative work. These are some of the people who have leveraged AI and maximized the use of it, whether it's artists, whether it's composers, et cetera. So that is something that is becoming accessible. Now, of course, developers: AI assistance for software developers is very much commonplace today, and many developers are leveraging AI to help them with their work. And finally, last but not least, is learning, right? That learning can start from, instead of googling, I'll just ask the likes of a ChatGPT and it'll give me an answer. Or it could be actual structured learning, like we have in Khan Academy, for example, if you're familiar with that. So there's an actual tutor that helps you and has that discussion with you until you feel that you've actually grasped the topic. So really, generative AI is allowing artificial intelligence to become ubiquitous, accessible at the consumer level, at the individual level, the personal level. And so, do we now need the geek, the data scientist, in every use case of AI? For the examples that I mentioned, we don't. They are sitting in the background somewhere in OpenAI or in Microsoft, but on a day-to-day basis we don't need them in our own organization. So it's out with the geek, and in with what? In with natural language conversation. You ask the AI: please generate for me a PowerPoint presentation about such-and-such, and you've got it. A very impressive tool that I had a look at recently is called Builder.ai. This is basically a piece of software that allows anybody to have a conversation with a chatbot, verbally, and
tell it: I want to build a web page for a marketplace, and you give it a description of the marketplace that you have. It goes off in the background and generates the website for you. It's that incredible. So really, we're using natural language conversation, and that's what makes it so compelling. And the list goes on and on. I mentioned Builder.ai, but look at the hundreds of startups that are coming into this space, startups every day. In fact, we have a statistic from Gartner on the number of generative AI foundation models that are created. How often do you think we're seeing a new foundation model? I'll give you some choices. Once a month, a new foundation model, a new generative AI model? Once a month? Once a week? Once a week sounds reasonable. Well, it's actually every two and a half days. Every two and a half days a new foundation model is created. Now that is the race. There's a race for a land grab on AI, specifically driven by generative AI. So here comes the question of this presentation: is your organization ready? Well, I'll give you another statistic. This is a survey, also from Gartner, over the past few years. Before 2023, we typically asked our clients, in this case CEOs, what they think of AI, whether they think AI will significantly impact their industry. A lot of CEOs felt that this was a bit distant from their business, from their industry. Like, AI, what do I think of when I hear the word AI? So only 20% said it would, only 20%. Until in 2023, that changed to 59% of CEOs believing it will make a difference in their industry. And then this year, in 2024, this jumped up to 74%, right?
74% of the CEOs that we have surveyed believe generative AI will have a profound impact on their industry, right? Now, what this tells us is that there is certainly a big appetite for AI as far as leadership is concerned. We work with a lot of clients, and we're seeing that pressure with the technology leaders we work with. They are asked to do something with AI. There's a fear of missing out: there's something we need to do here; how can we just sit and watch and miss the boat? So that's a reality. Also, if you look at Gartner's hype cycle, which is basically a reflection of the different emerging technologies, looking at their state of adoption and maturity, generative AI is at the peak of inflated expectations, and now it's kind of normalizing. But generally, what you're seeing there is that there is wide adoption of generative AI. So when I ask the question, is your organization ready for AI, I think the simple answer is: organizations have expectations from AI, right? So that is certainly a fact. Well, that's good news, right? There's this eagerness, this hunger for AI. But now comes the question: is your data ready for AI? Now, the data discussion on AI is a bit nuanced, because we talked about machine learning and we talked about generative AI, and they're not exactly the same animal. Let's have a look at that. So typically, this is what a data and analytics landscape would look like in terms of its components. You've got different data sources: operational systems, mobile applications, websites, et cetera. And then you've got some infrastructure there related to analytics, whether you've got a data warehouse, a data lake, or data marts. And then you've got integration mechanisms like data streaming, batches, ETLs. And you've got data governance, which is basically more of a management activity. And then you've got virtualization layers.
And then you’ve got the actual presentation and analysis layers related to data science and machine learning. You’ve got business intelligence, which has been the mainstay in the past decades. And then you can actually build some external services on top of that. So that’s the overall ecosystem, if you will. Let’s simplify it a little bit and think of a data warehouse, because this is really where this all originated. A data warehouse basically tries to capture all the data that you have in your organization and centralize it into a central repository that can then serve the organization in terms of insights. The insights don’t necessarily have to be AI; they could be just analysis through Power BI reports, for example. So typically, what you have there is what we call an ETL: an extract, transform, and load transaction. You’re trying to collect the data from all those different operational databases and put it in a staging environment, structuring it along the way. The key word here is structure. The big effort we made there was all about structuring the data, preparing the data for consumability, right? We had to do that through the transform and load. And then we put it into the data warehouse. And once it’s in the data warehouse, we build a little data mart for our marketing guys, another one for our finance guys, another one for our operations guys, where they can actually consume the data through reports from things like Power BI, et cetera. So that is the classic way of going about your data and analytics environment. The key words there were two. There’s structured data: all of that is based on structured data. And there’s centralized data: we’re trying to centralize the data as much as we can, and we’re trying to structure it. And we’ve got centralized technology. Now, when you think of generative AI, like I said, it creates a new paradigm. You don’t have to have structured data.
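The extract, transform, and load flow just described can be sketched in a few lines. This is a minimal illustration only, using an in-memory SQLite database; the table names, column names, and figures are hypothetical, not from the presentation.

```python
import sqlite3

# Minimal sketch of the classic ETL flow: operational source -> staging
# transform -> central warehouse -> a small "data mart" query for reporting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders_raw (id INTEGER, amount_text TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO orders_raw VALUES (?, ?, ?)",
    [(1, "100.50", "emea"), (2, "250.00", "amer"), (3, "75.25", "emea")],
)

# Extract: pull rows from the operational source.
rows = conn.execute("SELECT id, amount_text, region FROM orders_raw").fetchall()

# Transform: structure the data (parse amounts, normalize region codes).
structured = [(oid, float(amt), region.upper()) for oid, amt, region in rows]

# Load: put the structured rows into the central warehouse table.
conn.execute("CREATE TABLE warehouse_orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO warehouse_orders VALUES (?, ?, ?)", structured)

# A "data mart" for marketing: an aggregated view consumed via reports.
mart = conn.execute(
    "SELECT region, SUM(amount) FROM warehouse_orders GROUP BY region ORDER BY region"
).fetchall()
print(mart)  # -> [('AMER', 250.0), ('EMEA', 175.75)]
```

The point of the sketch is the middle step: in this classic world, nothing reaches the warehouse until someone has written the structuring code.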
You don’t have to have centralized databases or even centralized technology. So that is changing, and let’s have a look at what that means. If you think of the use cases, we’ve been asking our clients: using generative AI, where has it delivered for them? We’ve got 21% saying it’s been most effective in software development, 19% saying it’s been very effective in call centers and help desks, 19% in marketing content creation, and 4% in HR self-service. So these are the use cases that are developing. They’re changing by the day, but these ones have proven themselves more than others. But let’s think about those use cases. Take a moment to zoom in on each one of them and look at what it really means in terms of data. So if you think of a call center agent, they take the call, and very much like what’s happening with me now, the call is being transcribed in real time. The generative AI is playing in the background. It’s listening to the agent, listening to the customer, and it starts interpreting what’s going on. And through the intelligence that it has, through the access it has to corporate policies, our customer care portfolio, et cetera, it’s actually recommending to the agent what they need to do, what to advise the customer on the call. And not just that: after the call is over, it’s able to assess the agent and actually do the work that a supervisor would typically do in a back office. So that is a compelling use case, and it’s working very well. We haven’t yet reached the stage where we’re replacing the customer service agent. It’s probably going to happen, maybe two years from now, maybe five, I don’t know. But at the rate of acceleration that we’re seeing with the maturity of the technology, it will be good enough. Right now, it’s about assisting a customer service agent.
But when you think of what that means in terms of data, what key data asset have we used there? It’s an audio file. It’s not even an audio file; it’s live audio, right? And perhaps also combined with our policies, our regulations, and our service portfolio, and that is something that’s probably in a PDF document or something. Another use case that’s quite common: AI for resume screening. That’s being extensively used by the HR folks, and the data asset there is basically email. So that’s unstructured data. Think of a legal advisor; that’s another use case that’s picking up. AI used to advise on legal matters, by lawyers, basically. Likewise for HR, when it comes to your HR policies. So what are we looking at here? We’re looking at a PDF repository, and a PDF repository is also a form of unstructured data. It’s not tabular; it’s not something that you can put into a database. And if you think of programming and software development, the data source there is a Git repository, a code repository. So as you can see, the theme that we’re building here is that the data is very much unstructured. And when you think of unstructured data, you need to think of a messy room. Imagine yourself walking into a messy room, and there’s data everywhere. There’s data we can’t even see. Before generative AI, if you think of this analogy, we would have to clean up every inch of that room in order for us to use the data. But with generative AI, you don’t need to clean it anymore. You just leave it up to generative AI, and it’s able to pick up the data lying on the floor, the data on the sofa, the data in the pot, and even the data that we’re not seeing. It will figure out that there is a pair of running shoes under that cupboard, that they’re size nine, and that their color is pink. So it’s actually identifying the data that you’re not seeing. And you can think, wow, that’s amazing.
I don’t need to structure my data anymore. I don’t need to do the housekeeping. I can be lazy. To some extent, but not quite. Why? Because first of all, it’s expensive. If you’re going to fully rely on generative AI to do the housekeeping, it’s an expensive housekeeper. But there’s another big reason why: the risk. Think of who you are going to let into your room. Who are you going to allow to touch your stuff? So access rights are an extremely important part. You let it in, and it will basically vacuum up everything that it can. It will label everything. It will capture all the data. And that might not go well for you. Think of your corporate presentations, your payroll, your organization chart, et cetera. With all of that, you need to be careful. You don’t want to leave the door open without control. So access rights: basically, what we’re saying is, get the access rights in place for your unstructured data. Your data will not be ready for AI until you do that. The other risk that we need to manage is data interpretation. Now, we’ve all heard about AI hallucinations, yes? Basically, when it interprets things incorrectly. Large language models can sometimes get it wrong, and sometimes they can get it dramatically wrong. I’ll give you a simple example. It might not be a large language model example, but it shows how AI can be wrong. You see these pictures? These are pictures of what? Bagels, right? But within the bagels, what else do we have there? We have dogs, right? AI doesn’t see that; it classified them all as being bagels. Another one: muffins, right? You see the dogs there? OK, I’m sure you do, because you’re human. Because AI builds up from the details, while humans fill in the missing details with their experience. So we need to be careful about the misinterpretations the AI gives us.
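The access-rights point can be made concrete with a small sketch: filter unstructured documents against an access-control list before any AI assistant is allowed to ingest them. The documents, roles, and helper function below are hypothetical illustrations, not any product’s API.

```python
# Minimal sketch of access-rights enforcement for unstructured data: an AI
# assistant may only index documents whose ACL overlaps the user's roles.
# All paths and role names here are made-up examples.
documents = [
    {"path": "policies/leave_policy.pdf", "allowed_roles": {"everyone"}},
    {"path": "finance/payroll_2024.xlsx", "allowed_roles": {"hr", "finance"}},
    {"path": "marketing/brand_deck.pptx", "allowed_roles": {"everyone"}},
]

def indexable_for(user_roles, docs):
    """Return only the documents this user's AI assistant may ingest."""
    return [d["path"] for d in docs if d["allowed_roles"] & user_roles]

# A marketing employee's assistant never sees the payroll file.
print(indexable_for({"everyone", "marketing"}, documents))
# -> ['policies/leave_policy.pdf', 'marketing/brand_deck.pptx']
```

The design choice is that the filter runs before ingestion: once a file has been vacuumed into an index, it is too late to enforce the door policy.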
And if we rely on it blindly, then we can really go astray. The second aspect of data readiness is that we need to guide the model with our context. And that is basically two things: semantics and fine-tuning. Semantics is basically where you tell it, what does revenue mean? Remember, ChatGPT has the knowledge of the world, so it knows what revenue means in general. It doesn’t know what revenue means for my organization. A good example: a client of mine provides citizen services, but within a specific jurisdiction. They were trialing this generative AI chatbot with the citizens, where the citizen would come in and ask for a service they weren’t entitled to. So the AI had to know that this person doesn’t live within that jurisdiction of services, and it had to tell them: I’m sorry, you’re not a resident of this particular county. Now, it didn’t do that. It was actually offering them the service. And that’s a problem, because the semantics weren’t done in a way that told it what is meant by a citizen: the citizen of this particular service. So that’s the semantics. You need to work a lot on your data dictionary, and you need to fine-tune the model. We talked about generative AI not needing supervised learning. Well, I wasn’t 100% accurate when I said that. Generally, it doesn’t, but when you want it to be useful for a particular use case, you need to fine-tune the model. So that’s the other aspect of AI readiness: semantics and fine-tuning. So when I talked about the housekeeper and said we can be lazy, I was only joking. Data management actually continues to be a necessary practice for taming generative AI. That’s absolutely necessary. In fact, it’s even more important today.
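The semantics idea, a data dictionary feeding organization-specific definitions into the model’s instructions, can be sketched as below. The dictionary entries, prompt wording, and helper function are hypothetical illustrations of the pattern, not a real system.

```python
# Minimal sketch of a semantic layer: before a question reaches the model,
# organization-specific definitions from a data dictionary are injected into
# the instructions, so "citizen" or "revenue" means what THIS org means.
data_dictionary = {
    "revenue": "recognized revenue net of refunds, excluding intercompany sales",
    "citizen": "a registered resident of this county, verified by address on file",
}

def build_prompt(question, dictionary):
    """Attach definitions for any dictionary terms that appear in the question."""
    terms = [t for t in dictionary if t in question.lower()]
    glossary = "\n".join(f"- {t}: {dictionary[t]}" for t in terms)
    return (
        "Answer using these organization-specific definitions:\n"
        f"{glossary}\n\nQuestion: {question}"
    )

prompt = build_prompt("Am I a citizen entitled to this service?", data_dictionary)
print(prompt)
```

In the county-services story above, a layer like this is what would have told the chatbot that “citizen” means a resident of that particular jurisdiction.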
But perhaps we’re focusing less on the laborious efforts of structuring the data and more on the contextual efforts around what the data means. So we said there’s a new AI paradigm for enterprises. The data is unstructured. The data is no longer centralized. And the other thing is that applications are no longer centralized. Think of today: Gartner estimates that only 5% of application providers, your software companies, have embedded AI in their software today. Only 5%. Now, in 2026, we believe 80% of all software providers will have a form of embedded AI. And 2026 is just around the corner, right? So that’s going to happen very soon, meaning that the AI you will leverage and utilize is not just the AI that you built; it’s actually the AI that will come to you with your software. And again, that’s good news, but it could be bad news. Remember when we talked about who you’re going to let into your messy room? Here is, in fact, a Magic Quadrant from Gartner, one of the more recent ones, which we’ve built specifically for the emerging market of generative AI knowledge management applications. As you can see there, look at the number of players; a lot of them would be familiar to you, all in a race to add AI features and functionality into their software. And this Magic Quadrant we update every quarter. Typically, we update Magic Quadrants every year; for this one, we update it every quarter because the pace is phenomenal. It changes from quarter to quarter. So that’s what’s happening. It’s a reality: you’re going to get embedded AI, not just the AI that you build. And then there’s another phenomenon, which is even more dangerous. It’s what we call bring your own AI. Remember bring your own device? Now it’s bring your own AI, because you know what?
You’ve got your HR folks who say, we have this nice tool, our colleagues in company X are using it, and it’s fantastic. It makes our lives so much easier; we don’t need to read all the CVs. You’ve got your marketing folks already using so many tools that create their artifacts for them, and they never even asked for permission. So there’s this phenomenon of bring your own AI that is being progressively introduced into our organizations. And so if we look at the evolution of the AI tech stack: this is the classic AI tech stack. Remember when I was showing you the diagram of the data warehouse, et cetera? This is what you used to have. All data was centralized and structured. You’ve got an AI platform that you built; you’ve got your built AI. And then you serve different functions in your organization. How is that changing? First thing, the data: we still have some data centralized. Like we talked about, our policies, our customer records, et cetera. That’s cool, we have it, it’s centralized. But now the data is coming from everywhere, and of every kind. We talked about bring your own AI; we talked about embedded AI. And you’re going to have your AI platform, and you’re going to build a lot of blended AI at the moment. Meaning that, for example, you can leverage models from OpenAI or from Microsoft within an application of yours, in order not to reinvent the wheel. So you’ve got the blended AI. But on top, you’ve also got the embedded AI, where you have no control whatsoever over what the AI does; it’s embedded in the software. And you’ve got your bring your own AI efforts that are completely wild and out of control. And in order for us to make sure that it doesn’t get wild and out of control, here comes this middle layer: trust, risk, and security management.
So we alluded to that when we were talking about semantics and when we were talking about access rights. That’s extremely important. It’s a conceptual layer that every organization will need to build in order to mitigate the risks of generative AI. And then, on top of that, you’re going to have to have some governance, some actual committees. You’re going to have a central AI committee that looks at what we are going to allow in the organization and what we cannot permit. You’re going to have communities of practice where people exchange knowledge and experiences about their AI. And you’re going to have the trust, risk, and security oversight. This, my friends, is what Gartner calls the technology sandwich. This is our AI technology sandwich that basically describes how the AI landscape is evolving. In fact, it’s a paradigm shift from how it has existed in past years. And so we invite every company, every organization, to really understand what the sandwich means for their organization and look at what they need to introduce. It’s very much a learning curve; I don’t think any organization we’ve seen has actually figured it out. This is a conceptual framework, and we need to make sure that we’re learning how to apply it. And so, let me conclude. First point: I need to emphasize that we are at the cusp of an AI revolution. It was triggered by ChatGPT, but it’s not going to end there. The other takeaway is that at the individual level, we’re already feeling the impact. It’s making us much more productive. Each one of us is using it in different ways. And suddenly, I see emails that are so proficient, which maybe one year ago would have looked very different. So that is a reality. For enterprises, it will take longer than for individuals, because of the risks and the challenges of safely introducing AI. And in order to introduce AI safely, you need practices.
So that’s a key input, and basically two practices in particular: access rights, and semantics and fine-tuning. And for IT leaders, technology leaders: you need to be prepared that you will not have everything centralized and fully under control. You will have to accept that there will be an ecosystem around you; you just need to put the guardrails around it rather than own every aspect and every piece of AI in your organization. And that basically means that you need to prepare and customize your own technology sandwich. Bon appétit. Thank you. So thank you very much for your time. Please take some time to fill in the survey on what you think of this session. The QR code will take you to a landing page, and you’re going to see the title of the presentation. We have a question, please.
Audience: What are, I would say, the success stories you’ve seen of applying this to the maximum level, and what were their experiences?
Alaa Zaher: OK. You’ve already asked your question; I’ll just summarize. So Amal was asking: when it comes to the technology sandwich, what experiences have we seen in terms of fulfilling it successfully? Well, it’s a tricky question, because like I said, the technology sandwich is a concept we just came up with a month ago. But if you break it down into its components, what we’re seeing are organizations that are fulfilling bits and pieces of it. We’re seeing organizations that are introducing very strong security management practices. We’re seeing organizations that have committees for governance. We’re seeing organizations that are introducing data management and really trying things out. I was telling you about this example of the organization that was serving its citizens with this pilot chatbot. Interestingly enough, another instance where it went wrong was when somebody said, I’m unhappy with the service, and the chatbot, the generative AI model, responded: OK, if you’re unhappy, you can escalate to the office of the minister. Now, you would never get your call center agent asking you to escalate to the office of the minister; they should be proposing some solutions. And so what they learned on the back of that exercise is that they really need to double down on the semantics and the fine-tuning. There we see a lot of organizations that have actually made those trials, and they’re learning how to master the art of fine-tuning, because it’s not easy. You need to look at all the consequences, all the possibilities, and feed the learnings back into the model. So it’s an evolving landscape, and I think we’re all on that journey to learn together. Thank you very much for your question. Any other questions? Yes, please. Can you pass the mic?
Audience: So, oh yeah, so it is a nice presentation and I really like the technology sandwich thing you showed.
Alaa Zaher: I can hear you. Can you turn the volume up? Okay, because that goes straight to the headset. Oh, nevermind, I’ll just come closer. Everybody else can hear, it’s just me.
Audience: Hopefully. Yeah, so I really like the technology sandwich bit. How do you think these big AI labs like Google DeepMind and OpenAI, they have certain frameworks. So I think OpenAI has their preparedness framework and DeepMind has the frontier safety frameworks. What would you like those labs to do in the line of your technology sandwich to make safer AI and so on?
Alaa Zaher: Thank you, that’s an excellent question. So the question is about the big tech giants, the people who actually produce the generative AI. You remember we said most of us will not create generative AI; we’ll just leverage it from Google, from Amazon, from OpenAI, et cetera. Now, in Gartner, we also talk about two AI races: there’s the tech vendor race, the Googles of the world, and there’s the end-user race. The tech vendor race is an accelerated race, as we saw with that embedded AI functionality. They’re going full on, wanting to capture land and be first. For us, in our organizations, we can take our time and slow down, especially if our industry is not being disrupted by AI. Most organizations are still very much in this improvement of productivity, so there’s no sense of urgency, no “I need to do this very quickly.” So my advice is that, as an organization, if you’re not being disrupted, then maybe you have the leverage to start installing those practices, looking at what the vendors are providing, and deciding safely what matters to you. Now, for them, obviously, they’re going to push. I’ve had customers who deployed Microsoft Copilot or OpenAI on Azure, and they came back with huge bill shocks to start with. The cost of the tokens is incredible. And so we just need to slow down. We should not be following the vendors, because they will try to sell us as much as they can. And the business case for generative AI is still very much under development, yeah? What you spend is not necessarily going to give you an immediate return. So we say: for most organizations, it’s a steady pace. For other organizations, it might be an accelerated pace, but then there’ll have to be some, yeah? I hope, all right. Thank you very much. We’ve got two more questions in five minutes. I’ll take this one first.
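The bill-shock point lends itself to a back-of-envelope check before deployment. The per-token prices and usage figures below are hypothetical placeholders, not any vendor’s actual rates.

```python
# Rough sketch of a token-cost estimate to avoid "bill shock".
# Prices and volumes are assumed placeholders for illustration only.
PRICE_PER_1K_INPUT = 0.01   # $ per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.03  # $ per 1,000 output tokens (assumed)

def monthly_cost(requests_per_day, in_tokens, out_tokens, days=30):
    """Estimate monthly spend from per-request token counts."""
    per_request = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return requests_per_day * per_request * days

# e.g. 5,000 calls/day, ~1,500 input and ~500 output tokens each.
print(f"${monthly_cost(5000, 1500, 500):,.2f}/month")  # -> $4,500.00/month
```

Even with made-up rates, running this kind of arithmetic against realistic call volumes is what turns “the cost of the tokens is incredible” into a number the business case can be built on.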
Audience: From your experience with different customers, is it expected that generative AI will increase within enterprises and that a lot of entities will start to develop their own generative AI models to protect their data? Or is it expected that the big players will dominate?
Alaa Zaher: Yeah. Again, a brilliant question. So Mohamed is asking whether we are seeing organizations developing their own generative AI large language models. Not necessarily building large language models from scratch, but building in-house rather than using them directly from a provider. For example, you can use open-source large language models from Hugging Face, et cetera, or Llama, for example. So we’re seeing organizations leverage those open-source models. Why? Because they want to host them internally; they don’t want them to be on the cloud. But that then requires a lot of skill in terms of being able to leverage that model in-house. So there’s more effort there, and also less maintainability. You’ll have to take care of it, just like any open-source piece of software. You’re going to own it, so you’re going to have to have the skill sets to maintain it in the future. We’re seeing that being a driver for many organizations that don’t want to be exposed. So they get the large language model, it’s hosted internally, and then they need to invest in GPUs. That’s another limitation, so they need to actually start investing. When I talked about cloud, it takes away the hassle and the investment in your infrastructure; well, you’re going to have to invest in it if you’re going to host internally. So really, I think it’s a trade-off. We’re seeing some organizations, typically those with good software engineering capability, go down that route. They want to try things out for themselves. But for many of the organizations that are typically dependent on third parties and outsourcing, it’s very difficult to do that, so they just go down the route of third parties. And one final question from you, please.
Audience: Thank you very much for the presentation.
Alaa Zaher: Very nice. I’ll have to come closer.
Audience: Yes. My name is Martina Legal-Malakova. I am from GAIAxApp Slovakia, which is focusing on data spaces and data sharing. And I have a question about your presentation, because on the slide of the data, analytics, and AI landscape, you put data sharing as a social initiative. Why?
Alaa Zaher: Yeah. OK. Well, thank you for that. So your question is about the slide where we had the ecosystem of data and analytics, and why data sharing is part of the social initiatives. Many organizations are looking to leverage some data assets and data components that they have for the benefit of external parties.
Audience: Yes, but I ask you because, as I am focusing on data spaces, the most important data sharing is for, for example, the manufacturing sector, the energy sector, the circular economy. And this is why I asked you the question: why did you put it under social initiatives? It is really a business initiative.
Alaa Zaher: It could be a mix of business and social. I’ll give you an example. When I worked for a telecoms company, like I said, we sat on vast amounts of data, and a big part of it was about consumer behavior. We knew where everybody lived, where they went, who they called, and we created models to profile consumers. And that model could be interesting in the same way that the social media companies, like Facebook, do targeted advertising.
Alaa Zaher
Speech speed: 154 words per minute
Speech length: 8295 words
Speech time: 3212 seconds
AI has progressed from traditional machine learning to generative AI
Explanation: Alaa Zaher discusses the evolution of AI from traditional machine learning techniques to more advanced generative AI models. This progression represents a significant leap in AI capabilities and applications.
Evidence: The speaker mentions the transition from supervised learning models like image classification to large language models like ChatGPT.
Major discussion point: Evolution and Impact of AI
Agreed on: AI has evolved significantly and is becoming more accessible

Generative AI like ChatGPT has created a revolutionary milestone in AI capabilities
Explanation: Alaa Zaher emphasizes the revolutionary impact of generative AI, particularly models like ChatGPT. These models represent a significant advancement in AI’s ability to generate human-like text and perform complex tasks.
Evidence: The speaker demonstrates ChatGPT’s capabilities by showing its ability to summarize the story of Cinderella in different word counts and generate poetry in Arabic.
Major discussion point: Evolution and Impact of AI
Agreed on: AI has evolved significantly and is becoming more accessible

AI is becoming ubiquitous and accessible at the consumer/individual level
Explanation: Alaa Zaher argues that AI, especially generative AI, is becoming widely available and accessible to individual users. This democratization of AI technology is changing how people interact with and utilize AI in their daily lives.
Evidence: The speaker mentions examples of individuals using AI for writing, PowerPoint creation, and learning.
Major discussion point: Evolution and Impact of AI
Agreed on: AI has evolved significantly and is becoming more accessible

74% of CEOs believe generative AI will have a profound impact on their industry
Explanation: Alaa Zaher presents survey data showing a significant increase in CEOs’ belief in AI’s impact on their industries. This statistic indicates a growing recognition of AI’s potential to transform various sectors.
Evidence: The speaker cites a Gartner survey showing an increase from 20% to 74% of CEOs believing in AI’s significant impact on their industry from previous years to 2024.
Major discussion point: Evolution and Impact of AI
Traditional machine learning required structured, centralized data
Explanation: Alaa Zaher explains that traditional machine learning approaches relied heavily on structured and centralized data. This approach required significant effort in data preparation and management.
Evidence: The speaker describes the traditional data warehouse model with ETL (extract, transform, load) processes for structuring data.
Major discussion point: Data Requirements for AI
Agreed on: Data management remains crucial for AI implementation

Generative AI can work with unstructured data from various sources
Explanation: Alaa Zaher highlights that generative AI models can effectively utilize unstructured data from diverse sources. This capability represents a significant shift in how AI can process and learn from information.
Evidence: The speaker provides examples of generative AI working with audio files, emails, PDF documents, and code repositories.
Major discussion point: Data Requirements for AI
Agreed on: Data management remains crucial for AI implementation

Data management and governance are still necessary for effective AI use
Explanation: Alaa Zaher emphasizes that despite advancements in AI’s ability to work with unstructured data, organizations still need robust data management and governance practices. These practices are crucial for ensuring the responsible and effective use of AI.
Evidence: The speaker introduces the concept of a ‘technology sandwich’ that includes layers for trust, risk, and security management in AI implementations.
Major discussion point: Data Requirements for AI
Agreed on: Data management remains crucial for AI implementation

Organizations need to focus on data semantics and fine-tuning for AI
Explanation: Alaa Zaher argues that organizations must pay attention to data semantics and model fine-tuning to ensure AI systems understand and operate within the specific context of their business. This is crucial for accurate and relevant AI outputs.
Evidence: The speaker provides an example of a chatbot misunderstanding the context of citizen services, highlighting the need for proper semantics and fine-tuning.
Major discussion point: Data Requirements for AI
Agreed on: Data management remains crucial for AI implementation
Organizations need to prepare for a new AI paradigm with decentralized applications
Explanation: Alaa Zaher suggests that organizations must adapt to a new AI paradigm where applications are increasingly decentralized. This shift requires a different approach to AI implementation and management within enterprises.
Evidence: The speaker mentions that by 2026, 80% of software providers are expected to have embedded AI in their products, compared to only 5% currently.
Major discussion point: Enterprise AI Readiness

Companies should develop a “technology sandwich” approach to manage AI risks
Explanation: Alaa Zaher introduces the concept of a ‘technology sandwich’ as a framework for managing AI risks in organizations. This approach involves layering various components of AI implementation, including data, applications, and governance.
Evidence: The speaker describes the technology sandwich model, which includes layers for data, AI platforms, trust and risk management, and governance.
Major discussion point: Enterprise AI Readiness

There’s a trade-off between using third-party AI services and developing in-house capabilities
Explanation: Alaa Zaher discusses the decision organizations face between using external AI services and developing their own AI capabilities. This trade-off involves considerations of control, cost, and expertise.
Evidence: The speaker mentions that organizations with strong software engineering capabilities might prefer to host AI models internally, while others may rely on third-party services.
Major discussion point: Enterprise AI Readiness

Organizations face challenges in safely introducing AI due to risks
Explanation: Alaa Zaher highlights the challenges organizations face when implementing AI, particularly regarding safety and risk management. These challenges necessitate careful consideration and planning in AI adoption.
Evidence: The speaker mentions the need for access rights management and the risks associated with AI misinterpretation and hallucinations.
Major discussion point: Challenges and Considerations for AI Adoption

Cost considerations are important when deploying AI solutions
Explanation: Alaa Zaher emphasizes the importance of considering costs when implementing AI solutions. The expenses associated with AI deployment can be significant and need to be factored into decision-making.
Evidence: The speaker mentions examples of organizations facing ‘huge bill shocks’ when deploying AI solutions like Microsoft Copilot.
Major discussion point: Challenges and Considerations for AI Adoption
Audience
Speech speed
148 words per minute
Speech length
273 words
Speech time
110 seconds
Successful AI implementation requires strong security practices and governance
Explanation
An audience member highlights the importance of robust security practices and governance in successful AI implementation. This point underscores the need for organizations to have proper safeguards and oversight in place when adopting AI technologies.
Major Discussion Point
Enterprise AI Readiness
There’s a need to balance the push from AI vendors with organizational readiness
Explanation
An audience member raises the point about balancing the aggressive marketing from AI vendors with an organization’s actual readiness to adopt AI. This suggests that organizations should carefully assess their capabilities and needs before rushing into AI adoption.
Major Discussion Point
Challenges and Considerations for AI Adoption
Data sharing for AI has both business and social implications
Explanation
An audience member questions the categorization of data sharing as a social initiative, pointing out that it has significant business implications as well. This highlights the dual nature of data sharing in AI, affecting both social and economic spheres.
Evidence
The audience member mentions examples of data sharing in manufacturing, energy, and circular economy sectors.
Major Discussion Point
Challenges and Considerations for AI Adoption
Agreements
Agreement Points
AI has evolved significantly and is becoming more accessible
speakers
Alaa Zaher
arguments
AI has progressed from traditional machine learning to generative AI
Generative AI like ChatGPT has created a revolutionary milestone in AI capabilities
AI is becoming ubiquitous and accessible at the consumer/individual level
summary
There is a consensus that AI has evolved from traditional machine learning to more advanced generative AI, creating a revolutionary milestone in capabilities and becoming more accessible to individuals and consumers.
Data management remains crucial for AI implementation
speakers
Alaa Zaher
arguments
Traditional machine learning required structured, centralized data
Generative AI can work with unstructured data from various sources
Data management and governance are still necessary for effective AI use
Organizations need to focus on data semantics and fine-tuning for AI
summary
While AI has evolved to work with unstructured data, there is agreement that proper data management, governance, semantics, and fine-tuning remain crucial for effective AI implementation.
Similar Viewpoints
Organizations need to adapt to a new AI paradigm by implementing strong security practices, governance, and risk management approaches like the ‘technology sandwich’ model.
speakers
Alaa Zaher
Audience
arguments
Organizations need to prepare for a new AI paradigm with decentralized applications
Companies should develop a ‘technology sandwich’ approach to manage AI risks
Successful AI implementation requires strong security practices and governance
Unexpected Consensus
Balancing AI vendor push with organizational readiness
speakers
Alaa Zaher
Audience
arguments
There’s a trade-off between using third-party AI services and developing in-house capabilities
There’s a need to balance the push from AI vendors with organizational readiness
explanation
The speaker and the audience reached an unexpected consensus that organizations must balance the aggressive marketing from AI vendors against their actual readiness and capabilities for AI adoption. This consensus highlights the importance of a thoughtful, measured AI implementation strategy.
Overall Assessment
Summary
The main areas of agreement include the significant evolution and increasing accessibility of AI, the continued importance of data management in AI implementation, and the need for organizations to adapt to a new AI paradigm with proper security and governance measures.
Consensus level
There is a moderate level of consensus among the speakers, primarily focused on the technical aspects and organizational challenges of AI adoption. This consensus implies a shared understanding of the current state and future direction of AI in enterprises, which could lead to more focused discussions on implementation strategies and risk management in AI adoption.
Differences
Overall Assessment
summary
The main areas of subtle disagreement or different emphasis were on the implications of data sharing and the specific focus areas for AI governance and security.
Difference level
The level of disagreement was minimal, with most differences being in emphasis rather than fundamental disagreement. This suggests a general consensus on the importance and challenges of AI implementation, with slight variations in focus areas based on individual perspectives and experiences.
Partial Agreements
Both Alaa Zaher and the audience member agree on the importance of governance and security practices in AI implementation. However, Zaher focuses more on data management aspects, while the audience member emphasizes overall security practices.
speakers
Alaa Zaher
Audience
arguments
Data management and governance remain necessary despite AI’s ability to work with unstructured data
Successful AI implementation requires strong security practices and governance
Takeaways
Key Takeaways
AI has evolved from traditional machine learning to more advanced generative AI capabilities
Generative AI is becoming ubiquitous and accessible at the individual/consumer level
74% of CEOs believe generative AI will have a profound impact on their industry
Organizations need to prepare for a new AI paradigm with decentralized data and applications
Data management and governance remain crucial for effective AI implementation
Companies should develop a ‘technology sandwich’ approach to manage AI risks and implementation
There’s a trade-off between using third-party AI services and developing in-house capabilities
Resolutions and Action Items
Organizations should focus on data semantics and fine-tuning for AI implementation
Companies need to establish strong security practices and governance for AI adoption
Enterprises should take a measured approach to AI adoption if their industry is not being disrupted
Unresolved Issues
The full extent of generative AI’s impact on various industries
Best practices for balancing the push from AI vendors with organizational readiness
Optimal strategies for cost management when deploying AI solutions
The role and implications of data sharing in AI development across different sectors
Suggested Compromises
Organizations can host open-source AI models internally, balancing data control against the need for specialized skills and infrastructure investment
Companies can adopt a steady pace for AI implementation instead of rushing to match the accelerated pace of tech vendors
Thought Provoking Comments
Any sufficiently advanced technology is indistinguishable from magic
speaker
Alaa Zaher (quoting Arthur C. Clarke)
reason
This quote sets the stage for discussing AI as a revolutionary technology that seems magical to many people. It frames the subsequent discussion of AI capabilities in an intriguing way.
impact
It led to examples of AI capabilities that seem magical, like ChatGPT’s ability to summarize stories or generate poetry in different languages. This framed AI as something extraordinary and captured the audience’s attention.
No data, no AI, sounds like a song from Bob Marley, right?
speaker
Alaa Zaher
reason
This catchy phrase emphasizes the critical importance of data for AI in a memorable way. It distills a complex concept into a simple, relatable idea.
impact
It transitioned the discussion into the importance of data for AI systems, leading to an explanation of different data types and sources needed for various AI applications.
AI is becoming everyone’s business. It’s becoming accessible to everyone.
speaker
Alaa Zaher
reason
This statement highlights a key shift in AI adoption and accessibility, moving from specialized applications to widespread use.
impact
It shifted the conversation to discuss how individuals and organizations are using AI tools in their daily work, emphasizing the democratization of AI technology.
Every two and a half days a new foundation model is created.
speaker
Alaa Zaher
reason
This statistic vividly illustrates the rapid pace of AI development and the intense competition in the field.
impact
It underscored the urgency for organizations to consider their AI readiness and strategy, leading to a discussion about CEO perceptions of AI’s impact on their industries.
Remember when we talked about, who are you going to let into your messy room?
speaker
Alaa Zaher
reason
This metaphor effectively communicates the risks associated with giving AI systems access to unstructured organizational data.
impact
It led to a discussion about the importance of data management, access rights, and security considerations when implementing AI systems in organizations.
Overall Assessment
These key comments shaped the discussion by guiding it through several important aspects of AI adoption and implementation. They moved from the initial ‘wow factor’ of AI capabilities to practical considerations of data requirements, accessibility, rapid development, and security concerns. The speaker used relatable metaphors and striking statistics to make complex concepts more digestible, which likely helped maintain audience engagement throughout the presentation. The comments also facilitated a progression from general AI concepts to specific organizational challenges and strategies, providing a comprehensive overview of the AI landscape for enterprises.
Follow-up Questions
What are the successful stories of organizations applying the technology sandwich concept?
speaker
Audience member (Amal)
explanation
This question seeks to understand real-world implementations and experiences with the newly introduced technology sandwich concept, which could provide valuable insights for other organizations.
What should big AI labs like Google DeepMind and OpenAI do in line with the technology sandwich concept to make safer AI?
speaker
Audience member
explanation
This question explores how major AI developers can contribute to safer AI development and implementation, which is crucial for the responsible advancement of AI technology.
Is it expected that enterprises will develop their own generative AI models to protect their data, or will the big tech companies dominate?
speaker
Audience member (Mohamed)
explanation
This question addresses the future direction of generative AI development in enterprises, which has significant implications for data security and the AI market landscape.
Why is data sharing categorized as a social initiative rather than a business initiative in the data analytics and AI landscape?
speaker
Audience member (Martina Legal-Malakova)
explanation
This question challenges the categorization of data sharing, highlighting the need to clarify the business aspects of data sharing in various sectors.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online