Life Is But A Stream

Ep 17 - 2026 Predictions: What's Next for Data Streaming and AI

Episode Summary

AI is reshaping everything from who your customers are to how data is governed, secured, stored, and leveraged—powering a new generation of intelligent data strategies. Tune in as industry thought leaders unpack the biggest trends shaping agentic AI, data streaming, and modern data architectures in 2026.

Episode Notes

AI isn’t just evolving—it’s reshaping who your customers are, how systems operate, and what real time really means. From machines making purchase decisions to agents increasing query volume across databases, the realities of 2026 are forcing leaders to rethink data architecture and governance strategies at a fundamental level.

In this episode, Joseph is joined by Will LaForest (Field CTO, Confluent), Adi Polak (Director of Developer Advocacy & Experience, Confluent), and independent analyst Sanjeev Mohan, to break down critical insights from Confluent’s 2026 Predictions Report.

Together, they explore how agentic AI will transform digital commerce, why Model Context Protocol is quickly becoming table stakes, and how context engineering is emerging as the next major unlock for AI systems. The conversation also dives into the acceleration of AI-enabled cybercrime, why enterprises can no longer dismiss data governance, how Apache Iceberg™ is quickly becoming the standard in cold data management, and much more.

If you’re navigating AI and data streaming, this episode offers a grounded, opinionated take on what’s coming next—and how technologies like Apache Kafka® and Apache Flink® will shape what you need to do now. 

About the Guests:
Will LaForest
Will is Field CTO for Confluent. Will works with customers across a broad spectrum of industries and government, enabling them to realize the benefits of a data-in-motion architecture with event streaming. He is passionate about data technology innovation and has spent 26 years helping customers wrangle data at massive scale. His technical career spans software engineering, NoSQL, data science, cloud computing, machine learning, and building statistical visualization software, but it began with code slinging at DARPA as a teenager. Will holds degrees in mathematics and physics from the University of Virginia.

Adi Polak
Adi is the Director of Advocacy and Developer Experience Engineering at Confluent. For most of her professional life, she has worked with data and machine learning for operations and analytics. As a data practitioner, she developed algorithms to solve real-world problems using machine learning techniques and expertise in Apache Spark, Kafka, HDFS, and distributed large-scale systems.

Adi has taught thousands of practitioners how to scale machine learning systems, and is the author of the books Scaling Machine Learning with Spark and High Performance Spark, 2nd Edition.

Sanjeev Mohan
Sanjeev is a recognized thought leader in cloud technologies, modern data architectures, analytics, and artificial intelligence. With a keen focus on emerging trends and technologies, Sanjeev hosts the It Depends podcast and authors regular Medium blogs. He is also the author of Data Product for Dummies.

Formerly a Vice President at Gartner, Sanjeev was renowned for his in-depth research and strategic insights, shaping the research agenda for data and analytics globally. Over the past three years, he has led SanjMo, a consultancy specializing in technical advisory services that elevate category and brand awareness for clients.

Guest Highlights:
“There was already an asymmetric relationship between attackers and defenders. This just increases that, because it’s easier to use AI to attack than it is to use it to defend.” — Will LaForest

“We’re about to accidentally DDoS our software and databases, because we’re giving a tool to automate things without thinking if our systems are actually ready for it.” — Adi Polak

“Your proprietary data is your only moat. Everything else—models, performance, tooling—will keep changing.” — Sanjeev Mohan

Episode Timestamps: 
03:40 – Predictions Report Overview
05:05 – Prediction 1: The Rise of Agentic Commerce: Machines Are Your New Customers
12:45 – Prediction 2: Leading Platforms Will Offer Model Context Protocol
18:00 – Prediction 3: Context Engineering Is the Next AI Unlock
24:15 – Prediction 4: AI Will Apply Increased Pressure to Existing Databases
30:10 – Prediction 5: AI Will Drive Cyber Crime to Unprecedented Levels
36:15 – Prediction 6: AI Will Accelerate Enterprise Investment in Data Governance
43:25 – Prediction 7: Apache Iceberg™ Will Become the Standard for Cost-Effective Cold Data Management
49:20 – Prediction 8: Your AI Strategy Will Need an Independent Data Plane to Avoid Overcommitting
55:00 – Prediction 9: Early Adopters of Durable Execution Engines Will Gain a Competitive AI Edge
1:00:01 – Prediction 10: Improvements in Generative AI Will Help Businesses Finally Address Legacy Tech Debt

Dive Deeper into Data Streaming:

Links & Resources:

Our Sponsor:  
Your data shouldn’t be a problem to manage. It should be your superpower. The Confluent data streaming platform transforms organizations with trustworthy, real-time data that seamlessly spans your entire environment and powers innovation across every use case. Create smarter, deploy faster, and maximize efficiency with a true data streaming platform from the pioneers in data streaming. Learn more at confluent.io.

Episode Transcription

0:00:00.2 Joseph Morais: Welcome back to Life Is But A Stream. Today's episode is a special one. We're diving into Confluent's 2026 predictions report, which looks at how agentic AI, data architectures, and the data streaming ecosystem will evolve in the year ahead. Joining me today in this fantastic forum are three individuals who represent the community, the industry, and our customers here at Confluent. So Adi, starting with you, Adi Polak, our Director of Developer Advocacy & Experience, why don't you tell the audience a little bit more about yourself and what you do here at Confluent?

0:00:35.1 Adi Polak: I'm responsible for the Developer Experience engineering team and the Developer Advocacy team. The Developer Experience team builds tutorials, they work on CP all-in-one if you've ever used it, some of our SDKs and some of our connectors, and they really care about developer experience and how to reduce and remove friction for everyone using us, also in open source, as well as things that are very specific to our platform and our cloud. The second part is developer advocacy, which helps educate and entertain the industry on our latest and greatest capabilities in data streaming and AI, now context engineering too, and also works with customers to bring that education in-house so they can build great solutions in this space as well.

0:01:21.8 Joseph Morais: You know, when I first heard the term developer advocate, I had this like totally wrong view of it. I imagined somebody like picketing on the corner being like, "Hey, developers are cool, be nice to developers." But that's not what you guys do at all. So I appreciate that.

0:01:34.4 Will LaForest: There's a little bit of that.

0:01:35.9 Sanjeev Mohan: I think next time Adi does the keynote at Confluent Current, she should do that. Developers should run up and down.

0:01:43.6 Joseph Morais: Sanjeev, perfect timing. So tell me more, what does the independent analyst do?

0:01:48.5 Sanjeev Mohan: Oh, good question. If you find out, please let me know. No, I'm just kidding. My job every day is to make sense of this head-spinning technology. It's moving so fast. Every day I wake up, something new has happened overnight. And so my job is to talk to vendors, talk to end users, connect the dots, and then create these deliverables, either my written blogs or my podcast, and advise my clients on how to navigate this voracious world of technology that's changing so fast. And to be honest, I get paid to do this. End users don't. They have a day job. You guys have a day job, right? I'm an advisor and analyst who's understanding the patterns, seeing where this technology is going and helping my clients make sense of it.

0:02:48.3 Joseph Morais: Now, over to Will LaForest. Now, Will, before you tell us what you do, again, I was very ignorant when I started at Confluent. I heard field CTO and I thought to myself, someone that is a field CTO must work for some type of sports organization that has some type of field that has an inordinate amount of IT infrastructure and they need a CTO for that. Tell the audience what you actually do here at Confluent.

0:03:08.9 Will LaForest: So it's basically a really fancy title that means that I spend all my time working with customers. That's what it means. So like, I'm not tied in specific industries. I talk, you know, work with our customers in all industries across the globe. But fundamentally, it's just helping them solve their data infrastructure problems. Oftentimes that's related to data streaming, of course, because, you know, I work at Confluent. But it's not always that way. Like, sometimes we're just a small piece of the puzzle because I've been kicking around the data space for, I don't know, just over 30 years now. So, I'm not going to lie, I love it.

0:03:43.8 Joseph Morais: So tell us a bit more. I know you work very closely on the predictions report. Can you tell the audience a bit more about it before we get into the predictions?

0:03:51.1 Will LaForest: I think people love predictions. We asked ourselves, how could we source a pretty interesting set of predictions for 2026? And my team, so there's myself and a couple other field CTOs who have similar roles, and then the technology strategy group that's really focused, I guess, more on an organic level. We put our heads together because between us, like each... For instance, I probably do a few hundred customer meetings a year. And then on top of that, we talk to analysts, we cross-reference it. And so we put this all in a blender and figured out which ones were kind of interesting, which ones we saw the most of, and which ones we thought were the most important, the ones people would be interested in. And that's how we constructed these predictions, if you will.

0:04:43.5 Joseph Morais: Awesome. Well, if anyone disagrees with these predictions, please feel free to spam Will's LinkedIn. He's ready for it. So the format for today's forum is this: I'm going to introduce a prediction, and we're going to go prediction by prediction, then we're going to go round table, starting with a different person each time to get their opinion. Do you agree? Do you disagree? Do you have any insights on it? So the first prediction we have is the rise of agentic commerce: machines are your new customers. The idea is that in the future, in 2026 and beyond, agents will be making buying decisions. They'll be doing the product comparisons. Think about insurance policies or even SaaS solutions. It's not going to be the human making the buying decision, or at least they're not going to be the ones doing the research. Machines as customers. Adi, let's start with you. What are your thoughts on that?

0:05:34.3 Adi Polak: Yes, so as I was trying to process the report, which I think is fantastic, essentially the first prediction is saying we're dismantling the shop window of the internet. For the last 30 years, maybe even more, we kind of took digital transformation as taking the physical stores and putting them on screens, right? So digital shopping carts, digital checkout counters, and we designed everything around that. And what we're saying right now with this prediction is that everything related to the graphical user interface is over, is dead. We're shifting into a world where agent-safe APIs are kind of a way for users to say: don't show me that pretty website, just give me a raw data pipe, if you will. And I think this is a profound shift and a profound prediction that is very interesting. And definitely I see it more and more with different tools that enable us as humans to shop online, or to give an agent what we're looking for and have it find us the best price, the best product, the highest-rated product and so on. But essentially from a company's point of view, it means that now they have to start building more infrastructure around data pipelines.

0:06:52.3 Adi Polak: So for example, the classic product catalog might die in the end. A static list of products becomes useless to AI, because what it needs is the real-time stream of what exists right now, like a stock ticker: how many shoes, how much lumber, whatever you want to buy. That becomes really important, and also the fluidity of prices, because if machines are now buying and selling instantly, prices should also fluctuate on a second-by-second basis, not with seasonality like we have right now around the holidays. It's actually going to change how we do pricing for e-commerce. It will feel like something closer to high-frequency trading. So I think this is true. I saw tools, I saw a lot of demos of how people do it, and I know more and more people are going to use it continuously, especially now, in the holiday season, when everyone wants to get gifts. Everyone is looking for the best gifts to give to their loved ones, and everyone is using AI for it. So I definitely agree with that prediction, and I think it will make us think differently about what digital transformation really needs and how we can build that infrastructure to support these new AI customers that are coming in.
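Adi's "stock ticker" framing can be sketched as a minimal event model: instead of a static catalog row, each product is a stream of timestamped state changes, and an agent reads the latest event rather than a cached page. This is an illustrative sketch only; the class names, SKU, and figures are invented, not from the report.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ProductEvent:
    """One timestamped state change for a product: the unit an agent consumes."""
    sku: str
    price: float
    stock: int
    ts: float = field(default_factory=time.time)

class ProductStream:
    """Append-only stream of product events; the latest event wins,
    much like a log-compacted topic keyed by SKU."""
    def __init__(self):
        self._events = []

    def publish(self, event):
        self._events.append(event)

    def latest(self, sku):
        # An agent asks "what is true right now?", not "what did the page say?"
        for event in reversed(self._events):
            if event.sku == sku:
                return event
        return None

stream = ProductStream()
stream.publish(ProductEvent("plunger-01", price=19.99, stock=40))
stream.publish(ProductEvent("plunger-01", price=17.49, stock=38))  # price moved

current = stream.latest("plunger-01")
```

In a real system the in-memory list would be a durable, partitioned log, but the consumer-side contract is the same: the agent resolves current state from the stream, not from a rendered storefront.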

0:08:10.1 Joseph Morais: That's a great perspective. And I'm tickled over this idea of an agent scanning your website, seeing that your CSS is trash, and being like, we're not buying from them. There's no way we're doing that. Look at that color scheme, no way, no way. So, Sanjeev, over to you. What are your thoughts? Do you agree? Do you think this is the way things are going to roll out in 2026?

0:08:30.0 Sanjeev Mohan: I agree that this prediction is going to come true, but it's going to turn out to be a little bit more complicated. The reason I say that is because today, on a lot of websites, there's been a lot of investment in the user interface and things like cross-sell and upsell. For example, let's say I have an agent that reads my email and says, "Oh, look, this guy's calendar changed. Let me autonomously order an Uber or Lyft or whatever ride-sharing app is my preference." But Uber, if I went to the app, it's trying to lure me into getting Uber Black instead of UberX.

0:09:13.6 Joseph Morais: Right.

0:09:14.3 Sanjeev Mohan: And so all this kind of stuff certainly gets taken away from the suppliers. In fact, I had this picture in my mind that when internet shopping became big, a lot of people would actually go to physical stores, try out different things, check prices, and then go buy it on the internet. Maybe this time around it'll be similar, where you first do a Google search, you do some comparison, and then you go into ChatGPT and say, "Okay, buy it for me."

0:09:52.3 Joseph Morais: Right.

0:09:53.1 Sanjeev Mohan: I'm just thinking aloud, but a lot of investment done on the front end literally just goes to waste if agents are buying things from us, and there's going to be some pushback on that.

0:10:03.9 Joseph Morais: Will, let's end with you on this one. What are your opinions and are you seeing enterprises that you work with already talking about things like this?

0:10:12.3 Will LaForest: Yeah, I mean, for sure I'm seeing it. I'm seeing it myself just as a consumer. In the report, I kind of joked about this, but like, I don't know about you guys, but I'm always frustrated with plungers. They never work. And so finally, I was like, I'm going to get a plunger that works, but I don't want to spend three hours trying to figure this out. So I just let Perplexity do it for me. Literally. I was like, oh, this is so great, because for a purchase that size, no problem. So I think there's going to be some give and take. There's probably going to be a human in the loop for most of these things.

0:10:42.0 Joseph Morais: Yeah.

0:10:42.5 Will LaForest: But in talking to retailers or e-commerce people, they see it's coming, right? The demand is there. OpenAI's got it. Perplexity has it. I think the interesting thing is, just as Adi was sort of alluding to, just as retail and commerce had to make a shift when they moved from brick and mortar to the web, there's going to be a similar transformation here. And it gets back to, I think retailers and people participating in e-commerce are going to have to focus on making better-tasting beer. Because implementing the ACP protocol for doing agentic commerce, all these things, is just adding yet another layer of complexity. But companies like Stripe and Shopify and Wix have already built this. So I think it's going to really condense. I did talk to a pretty big retailer that's implementing some of this themselves. But in general, I think it's going to be hard. So I think there's going to be some more consolidation on the actual e-commerce platforms. And I think Adi's point about high-frequency trading is spot on, because of the market dynamics at this speed, which of course, at Confluent we don't mind, because that means you have to be real time, you have to be fast about these things.

0:11:58.0 Joseph Morais: Right. We're ready for it.

0:12:00.2 Sanjeev Mohan: Why do you need so many plungers, by the way?

0:12:03.9 Joseph Morais: This is... He stole my question.

0:12:05.9 Will LaForest: I got a lot of toilets and a lot of kids.

0:12:10.7 Joseph Morais: Oh man, that is awesome. So, you know, my personal take is, you know, I was thinking about this, the AI is going to do the research, but like you said in the plunger example, if you trust Gemini or whatever you're using, you're going to go with its selection. So even though it's the intermediary, it is making that buying decision, especially if the B2C buyer, B2B buyer, buys into the system that they're querying. So, fascinating. I feel very strongly about that one.

0:12:28.3 Joseph Morais: So, moving on to prediction two, leading platforms will offer model context protocol. So in 2025, the model context protocol became pretty much a standard. And it seems that MCP will be table stakes next year in 2026. Sanjeev, starting with you, do you agree? And what does this mean for the AI ecosystem?

0:12:59.7 Sanjeev Mohan: When Model Context Protocol first came out, I was a little bit hesitant. And the reason for that is because it came out so fast, it was adopted faster than businesses were ready for. For example, even today, security is not very well defined in MCP. So we've jumped the gun, in my opinion. But now I'm feeling that it's probably going down the right path, because as of just a few weeks ago, it's now part of the Linux Foundation. So with the Linux Foundation governing and managing the development of MCP, and every major vendor now signed up to do it, I think MCP will become the de facto standard. It'll be very interesting, because all the things that we used to worry about with APIs, like throttling, rate limiting, all the identity and access management, I think we will still have to do with MCP servers. Although I haven't seen too much. Maybe Adi, you're seeing more of that, but I agree with this prediction.

0:14:11.2 Joseph Morais: Yeah, I agree. I mean, I think with this new world of AI, everyone's looking to latch onto some type of established framework and MCP seems to be a prominent one. Will, I'm curious to hear your perspective. Are you dealing with any customers that are already using MCP as attached to other products and do you think this acceleration will continue?

0:14:27.7 Will LaForest: Yeah, I do. To me, this one's easy. I'm seeing it everywhere. Every company feels like they have to get in. It's table stakes. No one wants to miss out on this acceleration of AI. And totally, there's a lot of problems still to solve. Like, you can authenticate, as Sanjeev was pointing out, but that doesn't cover, for instance, third-party authentication, in terms of: I'm doing something on someone's behalf. How do you manage that? There's no standard for that. So there are problems. But yeah, everyone's adopting this. I think this is a no-brainer.

0:15:03.9 Joseph Morais: Adi, let's end with you.

0:15:05.5 Adi Pollak: Yes, I want to contradict it a little bit.

0:15:07.7 Joseph Morais: Okay, I love that.

0:15:08.7 Adi Polak: To me, it looks like a security nightmare wrapped in a business fantasy, to be honest. Because now we're giving a model the ability to access different interfaces, different APIs, and actually access our databases. And for years, we worked really, really hard on access control and RBAC and a bunch of these things in order to make our environment secure and governed, to tag data, to have the lineage, to know exactly what is happening in our data platform. And now we're bringing in this protocol that essentially wraps an LLM and lets it start accessing this data. I think we will see more adoption, probably, in early-stage companies and mid-sized companies, but I still think the large enterprises will look for the right governance, the right lineage mechanism, the right reproducibility, the right tools to ensure that these models access what they're allowed to access and connect to what they're allowed to connect to. The huge benefit of MCP is that if we have multiple systems, agentic systems or other systems, they can connect to as many databases in the world as they want, and it makes these connections very easy.

0:16:24.9 Adi Polak: So that's the great benefit of that. But at the end of the day, security and governance do matter. And so I think that will be one area that we'll have to solve. And the second thing is, we're looking at the biggest companies in the world to develop these models and these protocols. And there is a hidden assumption that all of the big companies in the world want to play nice. And yes, it was donated to the Linux Foundation and so on. But technically, what the prediction says is that MCP will allow developers to swap out models whenever they want. And then the question is, does OpenAI, Google, or Anthropic want to be the model that gets swapped out?
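One way to read Adi's governance concern: an MCP-style tool layer needs the same RBAC gate the databases behind it already had. A toy sketch of such a gate follows; the roles, tool names, and class are all invented for illustration and are not part of the MCP specification.

```python
class ToolGate:
    """Checks an agent's role against an allow-list before dispatching a tool call,
    and records every attempt so there is an audit trail (lineage) either way."""
    def __init__(self, permissions):
        self._permissions = permissions  # role -> set of tools that role may invoke
        self.audit_log = []              # (role, tool, allowed) tuples

    def call(self, role, tool, handler):
        allowed = tool in self._permissions.get(role, set())
        self.audit_log.append((role, tool, allowed))
        if not allowed:
            raise PermissionError(f"role {role!r} may not call {tool!r}")
        return handler()

# Hypothetical policy: a support agent may read orders, nothing else.
gate = ToolGate({"support-agent": {"read_orders"}})

orders = gate.call("support-agent", "read_orders", lambda: ["order-1"])
try:
    gate.call("support-agent", "drop_table", lambda: None)
except PermissionError:
    pass  # denied and logged, which is the point
```

The design choice here is that the gate sits between the model and every tool, so access decisions and the audit trail live in one place instead of being re-implemented per connector.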

0:17:09.1 Will LaForest: No, of course not. We're getting to that prediction later on.

0:17:16.8 Adi Polak: Exactly. You know, if AI models become a commodity and MCP becomes a protocol, what will companies do next to get that lock-in?

0:17:26.8 Joseph Morais: Yeah, that's really insightful. And see, I get the benefit of going last, so I get to synthesize my response from all of yours. Yeah, clearly it's not going away. People are attached to it. It's a security nightmare. I think about people dealing with unsecured GitHub repos, dropping API keys in there and forgetting about them. Now take that to the level where you just attach something up to MCP without even realizing the things you've unleashed. And now somebody can query or interact with your agent and expose things. Just an absolute nightmare, but I don't think it's going away.

0:17:59.0 Joseph Morais: Prediction number three: context engineering is the next AI unlock. So, if you go back to 2024, everyone was talking about RAG, right? Retrieval-Augmented Generation. 2025 was obviously the year of agentic AI. We believe that 2026 is going to be the year of context engineering. The challenge now is how to efficiently manage the right context within a limited context window. So, Will, starting with you, what are your thoughts?

0:18:29.1 Will LaForest: That is the challenge. You can see it just as a consumer using these AI products: how things deteriorate when there's too much context, and how poorly some tools actually use foundational models. And so I think even if large language models get much better, or any of these foundational models get much better, the really key thing is to get the context right. And it's actually not easy. It's really, really hard. So my sense, from talking to customers and seeing things like the explosion of turbopuffer, is that this is going to be a big deal for 2026.

0:19:11.9 Joseph Morais: Agreed. Over to you, Adi.

0:19:13.9 Adi Polak: I think with context engineering, we have a couple of problems. One is tokenization. Models today have a limit on how many tokens we can send to them. At the end of the day, we take the data that we have, the prompt that we have, the context that we have, and we merge or concatenate all of that together into a string, and then we send it to the model. So tokenization is going to be a problem. Rate control is a problem already. So how do we distill that context into what is most important for the model right now? And then in the world of context engineering, we have the memory aspect, the short-term, the long-term, and then the state management, three critical aspects. And then we need to understand what we are actually saving there, and how we are summarizing and compressing that, in order to give the model just enough information so it can succeed in the task that we're giving it. Now, another challenge is that even if the model is able to support a larger token count, latency becomes a bottleneck.

0:20:23.2 Adi Polak: Because the more context you give the model, the more tokens you give the model, the longer it will think until it gives you a result. And we know some systems need to act faster; some systems can tolerate a delay of 10 seconds, 20 seconds, 2 minutes and so on. But it can become an operational, I don't want to say nightmare, but it can become a challenge to balance the size of the context and the tokens against the cost and the requirements of the system. We need caching, we need to clean that data, we need to remove stop words, words that are not relevant and don't give us anything. So this is another aspect that is very, very important. And this is more of a data cleaning, data quality challenge that we know from the data world. So I think in 2026, probably, we're going to do a lot of manual work in order to see something that actually works. But I think we'll get there. And models are getting smarter. They're also managing their own context right now. So it's going to be an interesting space to live in.
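Adi's description of concatenating prompt, data, and context into one string under a token limit can be sketched roughly like this. The four-characters-per-token heuristic, the function names, and the example snippets are all illustrative assumptions; real tokenizers and context managers are far more sophisticated.

```python
def estimate_tokens(text):
    # Crude heuristic: roughly 4 characters per token. Real tokenizers differ.
    return max(1, len(text) // 4)

def assemble_context(prompt, snippets, budget):
    """Greedily pack snippets (assumed pre-sorted, most relevant first) under a
    token budget, then concatenate everything into the single string sent to
    the model."""
    parts = [prompt]
    used = estimate_tokens(prompt)
    for snippet in snippets:
        cost = estimate_tokens(snippet)
        if used + cost > budget:
            continue  # drop context that won't fit instead of truncating mid-thought
        parts.append(snippet)
        used += cost
    return "\n\n".join(parts)

context = assemble_context(
    "Set my fantasy roster.",
    ["George Pickens now plays for the Cowboys.",
     "League scoring is PPR. " * 50],  # too long to fit the budget below
    budget=30,
)
```

Even this toy version shows the trade-off Adi describes: a tight budget keeps latency and cost down, but the packing policy decides which facts the model ever sees.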

0:21:26.5 Joseph Morais: And then let's end with you, Sanjeev. Context engineering, is it all just hype or is it real?

0:21:31.8 Sanjeev Mohan: Well, it is very much real. But there's a lot of work that needs to be done in this space. I think 2026 will literally be focused on this. Going back to the recently concluded AWS re:Invent, every Nova 2 model has a million-token context window. But the problem we are facing now, other than the latency and the cost that Adi mentioned, is that LLMs start forgetting when you have such a large context. So prioritizing becomes really important. Memory management is really important. I'm super interested to see how this space is going to develop. Where are we going to store the context, the state, the different chains of thought, like when a model is going through "should I do option A, B, or C" and trying to calculate the next best option? All these things need to be stored in some ephemeral storage and then discarded when they're no longer needed. So that's one area. Another area that I research a lot is the space of data observability.

0:22:58.7 Sanjeev Mohan: That data is extremely critical as context, but so far it sits in a silo. We do data observability, we see pipeline health, maybe we do FinOps, we see data quality. But all that behavioral data is actually very valuable context. How we are going to add this observability data to the business data and pass it to the LLM is the next thing on the horizon. And so it's telemetry data. There's a new term being used now called trajectory data. So this is going to be a very interesting data management problem, in my opinion.

0:23:28.4 Joseph Morais: Yeah, I agree with everyone and everything that was said. Everyone's focusing on the quality of models, and I think we're going to discover very quickly, if we haven't already, that it's going to come down to the quality of prompting and the quality of context. And my own anecdotal example, it's silly, but I'm going to give it to you anyway: after a 15-year hiatus, I decided to join a fantasy football league this year, and I didn't make the playoffs, so I'm doing pretty well. I use Gemini to help me set my rosters, and I can't tell you how often Gemini still thinks that George Pickens plays for the Steelers, even though he was traded to the Dallas Cowboys this year. I have to remind it constantly. So I do not trust its advice. But I still ask it.

0:24:00.9 Joseph Morais: Prediction number four: AI will apply increased pressure to existing databases. So the idea is that agentic AI will increase query volumes by orders of magnitude, far beyond what humans generate today, and systems will struggle to keep pace with that demand. Adi, what are your thoughts on these needy, data-hungry agents?

0:24:34.5 Adi Polak: Yeah, I think the way it's framed in the report is a really polite way to say we're about to accidentally DDoS our software and databases.

0:24:45.9 Joseph Morais: Yes.

0:24:47.0 Adi Polak: Because we're giving out a tool that automates things and makes it very fast to develop these systems. And I don't know if our databases, it depends on the database, are ready for it as of today. Some databases already struggle with human use, or with some of the application use we're developing. And now we want to add another layer of these autonomous agents that are continuously going to query unknown things; we're giving them templates of things to query without the specific query. It's a huge change. So I think, yes, "increased pressure" is definitely a nice way to say it, and I think it's true. And I think most legacy stacks will fail. The proposed solution of change data capture and caching is a way to kind of patch the hole. But we will need to go and fix core systems.

0:25:41.3 Joseph Morais: Right.

0:25:41.8 Adi Polak: Because if we're just building expensive plumbing around the databases to keep them from bursting, eventually they will burst. We need to think better about how access is managed. What's the access rate control? How do we make high-speed decisions based on all our data? And there's also a cost attached, because database throughput is not free; at the end of the day, compute is not free. The more we demand out of it, depending on which tool we're using, the more we're going to pay for it. CDC and caching are definitely going to help, but we will need larger solutions around databases, and I think some legacy databases are going to slowly evaporate, put it that way.
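The CDC-plus-cache patch Adi describes can be sketched as a read-through cache kept fresh by a change feed, so agent reads hit memory instead of the primary database. Everything here is a hypothetical stand-in: the class names, the key, and the single-process cache are invented to show the shape of the pattern, not any real CDC tooling.

```python
class Database:
    """Stand-in for the system of record; every read counts as load on it."""
    def __init__(self, rows):
        self.rows = rows
        self.reads = 0

    def get(self, key):
        self.reads += 1
        return self.rows[key]

class CdcCache:
    """Serves reads from memory; a CDC feed applies writes to keep it fresh."""
    def __init__(self, db):
        self._db = db
        self._cache = {}

    def get(self, key):
        if key not in self._cache:      # cold read: one trip to the database
            self._cache[key] = self._db.get(key)
        return self._cache[key]

    def on_change(self, key, row):
        # Called by the change-data-capture consumer, not by readers.
        self._cache[key] = row

db = Database({"policy-7": {"premium": 120}})
cache = CdcCache(db)

for _ in range(1_000):                  # an agent hammering the same lookup
    cache.get("policy-7")

db.rows["policy-7"] = {"premium": 110}
cache.on_change("policy-7", {"premium": 110})  # CDC event propagates the update
```

A thousand agent reads cost the database one query; the update arrives via the change feed rather than cache invalidation, which is what keeps the pattern from "bursting" the primary, though, as Adi notes, it is still plumbing around the core system rather than a fix to it.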

0:26:25.6 Joseph Morais: Yeah, I agree. Sanjeev, let's talk about the industry perspective.

0:26:30.0 Sanjeev Mohan: The runaway successful use case in 2025 was coding. When Replit, Cursor, all of these took off, literally rocket-ship success, they started using Neon as a database in the background. And then we know what happened: Databricks went and bought it for a billion dollars. So this is a database use case that, to hardcore data people like myself, came as a shock, because a database stores your transactions or your analytical data long term. It's there for decades, you know, and then suddenly we find out these databases are going up and coming down all the time. That doesn't sound like a need for a database. In fact, I sometimes wonder, why do I need a Postgres database for that? Why bother with Postgres as a front end, right? No one's writing SQL. You could just do a low-level API call and remove all that overhead. So this is a very interesting space, because the question then becomes: do I need a Postgres database? Do I need maybe just a key-value store? Or maybe I can use a message queue to store this, because I have persistent storage. So there are too many choices, and I don't think we... This prediction, by the way, is so linked to the previous one.

0:28:03.2 Joseph Morais: And then over to you, Will.

0:28:05.9 Will LaForest: I really like the way Adi put it in terms of DDoS-ing the databases. But I will say, in reality, that's not really what's going to happen. And the reason why is because, if you've ever tried to implement a new system on an existing system of record, just breathing on it is like getting a root canal. Those things are guarded by the DBAs at most places. I just recently did a project with an insurance company, and talk about an industry that's going to benefit from gen AI use cases. But it took us a year just to get permission to use the XStream API to do CDC, which is not even adding that much load, because they are scared that if you do anything, it's going to fall over. So in theory, yes, if we just let everyone access any database; but in practice, in real enterprises, that doesn't really happen. So I think the upshot is that it's just going to slow down adoption if they don't figure out how to offload, just as we saw when we made the transition to larger data analytics platforms. I mean, heck, if you want to roll it back 50 years, this is why we created data marts, right?

0:29:16.1 Will LaForest: So it's like, we've seen this pressure before. It's the same thing. Now it's going to be even greater. And it's just another thing we've got to figure out, how to protect those systems of record so they don't keel over, I guess.

0:29:27.9 Joseph Morais: Yeah, if I had to bet on any of these predictions, this one would be, I think, the safest money. We're going to hand MCP tooling over to tens of thousands, millions of agents. They're going to query the database and it's going to have some effect. Even if the increase is 10%, for many systems, like you mentioned, Will, 10% will crush it. Oh, yeah. There's single-digit headroom on some of these systems.

0:29:50.3 Will LaForest: 1%

0:29:51.8 Joseph Morais: Exactly. So we're talking about orders of magnitude. So with this one, there's no question. And it's going to have to be addressed at the data destination, the data source, and at the agentic layer, where, as Adi mentioned, we've got to figure out how we build guardrails. Because we want our agents to have agency, but we don't want them to destroy everything. And that kind of perfectly rolls into prediction number five: AI will drive cybercrime to unprecedented levels. They're saying cybercrime could reach 18 trillion a year. I can't even visualize what 18 trillion of anything looks like. Deepfake fraud's going to be crazy. Automated malware creation, right? You just give it a prompt and say, create things that just mess with people or hijack computers. By 2026, attack volume could double. Will, I know you in particular work with data streaming and security. Is this a legitimate threat?

0:30:47.9 Will LaForest: Absolutely, it is. It's really scary. Just an amusing anecdote, which is one of the things that caused me to sort of fixate on this so much: I had a meeting with a bunch of CDOs, nothing to do with security at all. We were talking about AI, of course, and various other things, but I swear to God, it kept coming back to that same topic over and over. They're scared to death, because there was already an asymmetric relationship between attackers and defenders even before AI; i.e., it is a lot easier to do cyber attacks than it is to catch them and remediate them. This just increases that, I think, because it's easier to use AI to attack than to defend. Because of course, with large language models and the volume of data, you can't comb through all that; the token costs are incredible. But you can use it to generate pretty clever attacks. So yeah, absolutely, I'm convinced. This one's scary too.

0:31:50.2 Joseph Morais: Sanjeev, what about you? Are you hearing things like this? Fears in the industry?

0:31:53.3 Sanjeev Mohan: Yes, the prediction is correct, but it's like saying the sky is blue. And by the way, I'm not a security geek, so maybe my perspective is very naive. You know, I think that LLMs may be more adept at understanding attack vectors than our rule-based systems of the past. And again, I'm speculating here; this is not my area of expertise. But I'm just thinking that when the internet came out, there were new scams that we never had to worry about before that. And before that, there were telephone scams, you know. It is going to happen.

0:32:38.5 Joseph Morais: Yeah. And how about you, Adi? What's your perspective?

0:32:41.7 Adi Polak: Yes, it reminds me of a story from, I think, about a year ago. An employee at one of these companies was invited to a meeting with the CFO. So he joined a meeting with the CFO, like a video conference, a virtual meeting and so on. He heard his colleagues in the background. He saw familiar faces, familiar voices, and the CFO asked him to transfer, I think it was $25 million, to a specific account. And that's his job, you know; he's responsible for finance and he does these transactions. And so he did. And that was later discovered to be a deepfake. So someone completely studied that organization, created a deepfake of the CFO, and knew who the employee in the company was that had access to actually authorize and do these transactions. And that's scary. And I think we'll see a rise of zero-skill criminals, because today every script kiddie, you know, can leverage the AI coding tools that are available and start to automate malware creation and deepfake fraud, and even researching companies and so on. And our best way to mitigate it is kind of using AI for good.

0:34:03.9 Adi Polak: So the defensive AI arms race needs to fight the malicious AI activities that are happening. And I think it's a great opportunity for people that have expertise in the security space to actually build the companies that we need for the future.

0:34:20.6 Sanjeev Mohan: Can I add something? I talked with people who are scared, just like Will said, scared to death. If you are running a business, you don't want to put AI in production and have it behave erratically and expose all this data. So the fears are all genuine, and AI has a lot of work to do. But I talked to people who are literally convinced they're going to wake up one day and find out that this was all hype, and AI was so unreliable that, poof, it's gone. It's over. Don't worry about AI. Just go back to normal life. I don't think we're ever, ever going back, just like we cannot imagine going back to pre-internet or World Wide Web days. We are not going back. I think we need to think about whether AI will always be this unsafe or whether there will be gradual improvements; maybe it'll take two years, five years, 10 years. But AI is not going to stop like some people think.

0:35:27.7 Joseph Morais: Yeah. The genie's not going back in the bottle. There's no question. If anyone believes that, I don't want to call them naive, but maybe they haven't seen as many cycles as we have in technology. Technology takes off and it continues to persist. As Will mentioned, it is much easier to use these tools to craft a sword than a shield. But as Sanjeev and Adi mentioned, I'm optimistic about using, you know, AI-powered firewalls, AI-powered antivirus on people's computers. Even with some of the telephone scamming, people have set up pretty convincing bots that talk and just waste these scammers' time. So I'm hoping we see more of that than the other side. It's a scary future, but I'm still optimistic.

0:35:59.0 Joseph Morais: Now, looking at our next prediction: AI will accelerate enterprise investment in data governance. So the report predicts that there'll be a major rise in large enterprise investments in data governance specifically, and not just in regulated industries or the public sector, but everywhere. And for the uninitiated, data governance is the concept of ensuring that data is of high quality as it ingresses some type of system; that there's some type of lineage, so you can track the data as it flows through your systems and perhaps transforms; and that you can also provide things like metadata about the data, so you can know: where did this data come from? What does this field mean? Sanjeev, I would love to start with you and your thoughts. How does the industry view data governance, and do you think AI will drive adoption?

0:37:01.5 Sanjeev Mohan: I love this prediction, because I am on a mission to tell people, "Don't think of AI as a brand new, separate piece, a separate silo in your stack or a separate initiative. It's an extension of data." AI is a use case of data. It's just like machine learning doing prediction; now we're doing generative stuff using AI, but data governance still becomes even more critical. The stakes have risen. It used to be that if my data governance sucked, which was the case in most organizations, I could go fix my spreadsheets and make sure everything was...

0:37:49.6 Joseph Morais: Yes, I could fix it downstream. Yes, exactly.

0:37:51.4 Adi Polak: Yeah.

0:37:51.5 Sanjeev Mohan: Now you can't do it, especially if agents are going to work on it. So I love this prediction, and data and AI governance are going to go hand in hand. AI changes governance just like everything else, whether it's ETL or structured versus unstructured data. Governance with AI is going to be very different, because you're governing PDF documents. How do you know... In a database where you've got, you know, somebody's salaries and age groups and all, you can put rules and say, "You know what, age can be between this and this, or this is PII information like a Social Security number." But when you have a stream of data, and life is but a stream, how do you know that all of a sudden somebody in their Slack channel said, "Hey, guess what, my credit card isn't working, here is the number."

0:38:49.6 Joseph Morais: Right.

0:38:49.7 Adi Polak: Right.
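Sanjeev's credit-card-in-a-Slack-channel example is roughly what in-stream PII detection looks like. This is only a toy sketch, a regex plus a Luhn checksum filter applied per message; a real governance tool would do far more (context, entity types, false-positive handling):

```python
import re

def luhn_valid(digits):
    """Luhn checksum: quick filter separating real card numbers from random digits."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:                       # double every second digit
            n = n * 2 - 9 if n * 2 > 9 else n * 2
        total += n
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact_cards(message):
    """Redact anything in a message that looks like a valid card number."""
    def replace(m):
        digits = re.sub(r"[ -]", "", m.group())
        return "[REDACTED CARD]" if luhn_valid(digits) else m.group()
    return CARD_RE.sub(replace, message)

print(redact_cards("guess what, my card isn't working: 4111 1111 1111 1111"))
```

In a streaming setup, a function like `redact_cards` would run on every record before it lands in a topic, which is exactly the "govern it in flight, not after the fact" point Sanjeev is making.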

0:38:50.4 Joseph Morais: So Adi, let's pass it over to you from the community side of things. Do you think this is what's going to push these enterprises into finally taking data governance seriously?

0:38:59.0 Adi Polak: Yeah, when you think about community and developers, data governance is the least loved area to fix. And if I think about 2026, I'd call it the great bureaucratic panic of 2026 for AI, for a couple of reasons. For the last couple of years we were really obsessed with the magic: the chatbots, the image generation, the agents. This is a lot of fun, but, you know, now we're kind of waking up into that hangover phase of, oh, there's a liability here, because the model itself is only a black box. But we have to know; we have to audit the record. So what did we put into the model? Who authorized that data to be used? Was the data right, or was it outdated? We want to think critically about how we build that. And we also want to think about data lineage systems, right? Tagging, debugging. If I have an agent that does 50 steps and so on, I want to be able to debug what the agent did, and that's also part of governance, because I need that observability system to tell me what happened in there. And in the data world, it becomes kind of like an archaeology nightmare, because you have millions and trillions of records and it's a needle in a haystack.

0:40:25.4 Adi Polak: Something went wrong and you have to figure out what went wrong. So how do you make sure you are setting yourself up for success early on, when you are building these systems as an engineer, to not get into that spaghetti of data and trying to understand what didn't go well? So I think architects, developers and so on will need to prioritize that. It's probably not the fun, exciting part of building AI agents, but this is the reality. So I think yes, 100%, there will be some tax on innovation, because that means that, you know, the cheap era of AI, of the magic, of excitement, of being able to produce something, the POC, is a little bit over. Some people might call it AI winter. I don't think it's AI winter. I think it's a step into maturity, you know; as we get older and we get more into the world of enterprise, we need to make the right decisions around what the model does, and we need all of this, you know: governance, auditable, replayable and so on. That will become critical. But this is also the phase where we'll start seeing ROI from these investments as well. So, 100% I believe in that. A lot of developers are not going to enjoy it.

0:41:45.2 Joseph Morais: Just like documenting their code. They don't like doing it, but they got to do it.

0:41:50.0 Adi Polak: Right. I mean, AI can help you document today. What's, you know...

0:41:53.6 Joseph Morais: That's true. Over to you, Will. I'm curious, you know, I know with Confluent, we always preach, start with data governance, right? So do you think AI will be the thing that pushes some of the larger enterprises to finally buy into that?

0:42:10.2 Will LaForest: Yeah, I mean, there's been some value for the regulated industries; because they're so risk averse, that's why they've implemented it. But I do think, broadly speaking, this is going to force a lot of businesses' hands, if you will. Adi's right, developers hate it, but executives want it to help mitigate the risk of applying AI. By the way, it's a really hard thing to do. It's easy to say, oh, everyone's going to do it, but...

0:42:37.9 Joseph Morais: Greenfield, maybe, but you're right, it's very hard to do.

0:42:41.6 Will LaForest: I mean, take lineage. It is very difficult to do cross-system lineage, and often data flows to a lot of systems. But I think ultimately this is going to push some investment and some more innovation in data governance.

0:42:55.4 Joseph Morais: Yeah, I agree. I look at AI as just another data challenge, right? It's probably the greatest data challenge that most of us will experience in our lifetime, but it is what it is. And it's applying pressure to data at an unprecedented level because of how quickly things are evolving. And any deficiencies you had in your data, whether it's lack of integration, lack of scale, or lack of governance, are all going to be exposed once you put that AI layer on top.
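The auditable, replayable agent record Adi described can be sketched very simply. This is a hypothetical in-memory version (class and field names invented here); a production system would persist each entry to a durable log or topic, but the shape of the record is the point:

```python
import time

class AgentAuditLog:
    """Append-only record of every agent step, so a run can be audited
    and replayed later. A sketch: real systems would persist entries to
    a topic or table rather than an in-memory list."""
    def __init__(self):
        self.entries = []

    def record(self, run_id, step, tool, inputs, output):
        self.entries.append({
            "run_id": run_id, "step": step, "tool": tool,
            "inputs": inputs, "output": output, "ts": time.time(),
        })

    def replay(self, run_id):
        """Reconstruct exactly what one agent run did, in order."""
        return [e for e in self.entries if e["run_id"] == run_id]

log = AgentAuditLog()
log.record("run-1", 1, "search", {"q": "order 42"}, "found order 42")
log.record("run-1", 2, "refund", {"order": 42}, "refund issued")
log.record("run-2", 1, "search", {"q": "order 7"}, "not found")

trace = log.replay("run-1")
print([e["tool"] for e in trace])   # the needle, without the haystack
```

Keyed by `run_id`, the fifty-step debugging problem becomes a filtered read instead of archaeology across trillions of records.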

0:43:29.0 Joseph Morais: Now, our next prediction is that Apache Iceberg will become the standard for cost-effective cold data management. The reason this prediction called out Iceberg in particular is long-term retention, compliance, and AI model training. Now, Adi, I know you've openly discussed Iceberg. Do you agree? And assuming yes, why do you think the industry is headed this way?

0:43:50.9 Adi Polak: Yes, it kind of hints at the war of table formats and what happened in the industry. I agree. I think, you know, looking at all the optimizations, looking at the underlying infrastructure of how Iceberg is built, it does make sense. It does win in the industry as of today. It has everything we need at this point to be cost-efficient cold data management, with the indexing capabilities and optimizations for different file formats, and with the transaction management and versioning that we need as well. And by the way, in the AI world, we still need some level of versioning of the data. There could be solutions that merge the advantages of a couple of formats; that could happen as well, especially knowing that some companies have the most contributors in that space and might want to put out their own solutions that combine the good parts of their existing table format and Iceberg specifically. So there is an aspect of politics in that as well. But yeah, I think it will win. I think for years vendors tried to lock us into proprietary formats, and Iceberg broke that lock and kind of helped turn the storage layer into a commodity, something that everyone can store and use, and it's relatively cheap.

0:45:16.7 Joseph Morais: Excellent. Sanjeev, your thoughts?

0:45:18.9 Sanjeev Mohan: Look at what's going on with all the providers, including Databricks. Databricks, by the way, owns both Iceberg and Delta. These are open standards, so they don't really own them, but they have the developers. What happened was, when we first got into the cloud and Snowflake came out, it was such a breath of fresh air because we could separate, disaggregate, compute from storage, and we didn't have this monolith. But what we didn't realize was that, yes, we disaggregated storage and compute, but they came from the same vendor. Now, with Iceberg, we have unbundled it. So I could use Snowflake or Databricks, but I could store my data in Iceberg tables. Those Iceberg tables I could read in Redshift or BigQuery or Cloudera, or I could just run my Spark jobs on the same data. So the beauty of this whole Iceberg open data format is that I have one copy of data and different compute engines. I'm doing Pandas for something, DuckDB for something else, and the other ones I mentioned. So this to me is truly a game changer. And Snowflake has published their benchmark numbers.

0:46:37.4 Sanjeev Mohan: The performance of their proprietary storage versus Iceberg tables is the same. And there's another big piece on top of that, which is the catalog. The Iceberg REST catalogs are now getting synced up with all these other data catalogs. So we truly are moving into a world where I could have Salesforce and SAP, and their catalogs are talking to Unity Catalog or Polaris. We are truly in a multi-engine, vendor-agnostic world where storage has become a commodity and you can bring the best compute engine for your use case.

0:47:22.5 Joseph Morais: Absolutely. Now Will, let's end with you.

0:47:24.5 Will LaForest: I think this prediction was actually a little bit less about whether the industry is going to adopt it, because I think that ship has sailed.

0:47:34.7 Joseph Morais: Right.

0:47:35.4 Will LaForest: I think it's really focused on the cold storage. And honestly, if you peel it back, this was maybe the most contentious one within Confluent, simply because there's a lot of Databricks out there and a lot of people like Delta.

0:47:40.7 Joseph Morais: Yeah.

0:47:41.4 Will LaForest: But I think the reason why we focused on Iceberg is not because we intrinsically thought, you know, it was necessarily better than Delta. It was specifically for cold storage. And the reason why is that a lot of times the highest-volume cold storage is things like observability. And the teams that are managing observability and cybersecurity are not the same people as the data analytics teams. So even in an organization where they're using Databricks, oftentimes, when offloading from their SIEMs or their observability tools, Iceberg is just the most natural thing. It's just the easiest thing, particularly because, if you're in AWS, you can use something like Athena to query it, right? Every single CDO or CISO I talked to is buying into this pattern: I can't put all my observability or cybersecurity data into Splunk or Elastic or whatever, but it's still queryable in Iceberg, and that's awesome. That's way better than it used to be, right?

0:48:50.5 Joseph Morais: Yeah, absolutely. And you know, first of all, from a marketing standpoint, cold storage and Iceberg, they got that figured out. But more seriously...

0:48:51.1 Will LaForest: The branding.

0:48:52.2 Joseph Morais: It's great branding. I mean, I think this is a no-brainer, right? Especially for cold storage. I think the vast, vast majority of data will be stored in some type of open table format in something that looks like S3, some type of object storage. And it just makes too much sense in terms of the durability guarantees, the cost and the scale.

0:49:12.6 Joseph Morais: Our next prediction, your AI strategy will need an independent data plane to avoid overcommitting. So the idea here is that LLM vendors are building richer ecosystems. The risk is being locked into platforms where operational data is entangled within a single vendor's ecosystem. Sanjeev, what are your thoughts on needing an independent data plane to avoid this pitfall?

0:49:41.6 Sanjeev Mohan: To be honest, I don't understand this one. We've always had data planes and control planes, so I'm not understanding what exactly the prediction is here.

0:49:52.6 Joseph Morais: Okay, this is a perfect one to pass over to Will.

0:49:55.4 Will LaForest: Yeah, I think so. The key here, and we already touched on this a little earlier with context, is that as the frontier model providers add more services to make themselves more sticky, that's industry lingo for lock-in, especially with data gravity, which has for decades been one of the biggest lock-in factors, you have to be careful not to solely depend upon one of the frontier models to manage all your state and all your content and context. I.e., it's okay to use them as optimizations, like threads in OpenAI, for instance, but you need to have an independent data strategy, whether you want to call it a data layer or a data plane, to help make sure that you can still switch between these models instead of getting locked into an ecosystem.

0:50:53.2 Joseph Morais: So Sanjeev, with that context. See, we've gone full circle, back to context. What are your thoughts?

0:50:58.6 Sanjeev Mohan: Yeah, so I don't know if data plane is the right term. Data strategy I can understand. Basically what this is saying is: don't bundle your data and LLM together, because you may change your LLMs. So 100% I agree. A data plane to me is putting data in your VPC, your account, so only you, the organization, have control, not the cloud provider. And just like we did that for the cloud, we should do it for AI as well. There is absolutely no guarantee which model will win in 2026. Every single week we have a new model and a new benchmark that beats the old models. This journey is going to continue. Protect your data. I tell my end-user clients, "You have only one moat, and that moat is your data." That's it. Everything else, performance and models, all those things are constantly changing, but your proprietary data is your ticket to success.
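The "don't bundle your data with your LLM" point reduces to one design choice: keep conversation state in a store you own and hand it to whichever model you call. This is a minimal sketch with stand-in providers (all names are hypothetical; a real version would call actual provider SDKs), not how any particular vendor does it:

```python
class ConversationStore:
    """Conversation state lives in your own data plane,
    not inside any one model provider's ecosystem."""
    def __init__(self):
        self.histories = {}

    def append(self, thread_id, role, content):
        self.histories.setdefault(thread_id, []).append(
            {"role": role, "content": content})

    def get(self, thread_id):
        return list(self.histories.get(thread_id, []))

def chat(store, thread_id, user_message, model_call):
    """Because history lives in `store`, `model_call` can be any
    provider's completion function; swapping models is one argument."""
    store.append(thread_id, "user", user_message)
    reply = model_call(store.get(thread_id))   # full context supplied by us
    store.append(thread_id, "assistant", reply)
    return reply

# Two stand-in "providers" so the swap is visible.
provider_a = lambda msgs: f"A saw {len(msgs)} messages"
provider_b = lambda msgs: f"B saw {len(msgs)} messages"

store = ConversationStore()
chat(store, "t1", "hello", provider_a)
reply = chat(store, "t1", "switch!", provider_b)  # mid-thread model swap
print(reply)  # provider B still gets the whole history
```

The inversion is the prenup: the provider never holds the only copy of your context, so changing models next week costs one function swap, not a migration.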

0:52:05.5 Joseph Morais: Excellent. So Will, I think we kind of already got your take. You clearly agree with this, so let's pass it over to Adi.

0:52:11.9 Adi Polak: Yes, I will. First of all, thank you for clarifying, because I was a little bit confused about data...

0:52:16.7 Will LaForest: By the way, for the record, I don't mean to interrupt you, Adi, but I wanted to call this prediction "you need a prenup with your LLM provider." They decided that was not professional enough, so they went with this data...

0:52:31.2 Joseph Morais: I like that. I like those takes. Yes.

0:52:31.7 Adi Polak: Yeah. A prenuptial agreement for your AI strategy; that's definitely a way to look at it. Yeah, it takes me back to compute versus storage and separating those two. So looking at it as: how do I separate my storage, my data platform, from the actual LLM where I'm running the models? I think it's smart, because it enables us to change things at the model level pretty fast, knowing that every day there is something new coming out. We want to be able to use the best model; we want to be able to get the best pricing with the best rate and so on. And when I think about the world, taking us back a little bit to the previous prediction about context engineering, at the end of the day I see it as managing our data, managing our storage, whether it's hot storage, cold storage and so on. These are things that we need to do and think critically about. On the other side of that prediction is simplicity.

0:53:00.0 Adi Polak: If I have a platform where I already manage my data, and it also enables me to integrate and use different LLMs, GPT, Claude, Gemini, whatever, it makes it very simple to adopt in the company. Yet, as we know, you know, it could be that a vendor will raise the prices.

0:53:45.4 Joseph Morais: Right.

0:53:45.5 Adi Polak: It could be that the solution wouldn't be of the quality that we expect, because the company is not specialized in building those data storage and data access solutions for us. So a platform can be everything, or it can be nothing, based on the quality of the solutions it provides. And this becomes, in my opinion, really critical: what do we need, and how do we build it? And in our ecosystem, with what we have right now in the AI era, because everything is so new and we're so early in it, being married to a specific platform that gives us everything is a huge risk, because we don't know who's going to stick around in the next couple of years, we don't know who is going to give us the best solution, and the competition is still fierce.

0:54:35.5 Joseph Morais: Yeah, I personally absolutely agree with this one, and I think it really complements the Iceberg prediction and the governance prediction. There's a theme here, right? You want to make your data high quality and you want to make it pluggable. You want to make it accessible to other compute engines. And if you're commingling your compute with your storage, that might bite you in the butt.

0:54:45.9 Joseph Morais: So our next prediction is that early adopters of durable execution engines will gain a competitive AI edge. And again, because durable execution is not my space, I'm going to read something to the audience to make sure I have it right. Durable execution is a practice where code progress is persisted, allowing applications to automatically recover from crashes and resume exactly where they left off without rerunning completed steps. This ensures that services never lose state, remain resilient to failures, and effectively make interruptions irrelevant. That's right, that's my reading voice. Examples of durable execution vendors include Temporal and Restate. The idea here is that you get increased adoption because of fault tolerance, workflow persistence, and simplified event-driven architecture. Will, how about you start with this one? Do you think durable execution is a key enhancement for building agentic AI?

0:55:52.0 Will LaForest: I think it certainly helps. This was another one where there was much debate. I'm really interested to hear what Adi has to say, because she's a little bit closer to the people that are developing this. But clearly, agentic processes are really complex, with lots of pieces, and the participants are really flaky. So managing that is hard. You know, human in the loop is an important aspect, so sometimes you have to wait minutes or seconds for a human to respond. So there's a lot of complexity that you have to handle in building these systems that a durable execution engine, if you adopt one, will take care of for you. I guess the question is, is that going to happen or not? Certainly Temporal's adoption is going up a lot, and Restate is starting, but that was the big question. I think the benefits are there. I just don't know if the adoption is going to happen or not, because there's also LangChain and LangGraph, and they have their own way of building agents. So I'd love to hear what Adi has to say on this one, actually.

0:56:52.2 Joseph Morais: Me too. As a matter of fact, Adi was next on my list, so you're reading my mind, Will. Let's go ahead.

0:56:57.8 Adi Polak: Yeah, absolutely. I think it's essentially saying, you know, AI agent architecture is too weak and too slow to function without a safety net. So if anything happens in the middle, happens in between, we need that save-game button for the code, for everything that we do. A crash halfway through a task for an agent can be very expensive, in terms of time and in terms of dollar signs. Sometimes a model can take 10 minutes to compute and think and do all the research it needs to do, and, you know, five or ten dollars in compute. And trying it again is expensive. Like, how many times do you want to try it? What are the server hiccups that you're willing to take, and so on? And I think solutions like Temporal, like Restate, and I think there's also another one, Cadence, if I remember correctly.

0:57:47.8 Will LaForest: Little Horse.

0:57:48.0 Adi Polak: Little Horse.

0:57:48.1 Joseph Morais: Oh yeah, Little Horse, right.

0:57:50.8 Adi Polak: Yes. I think their solution is very interesting, not because AI is flaky, but because it gives us a wrapper to know what is happening in the AI as it runs. So pairing these engines makes it very, very interesting, especially if we're pairing them with other solutions that enable, you know, real-time compute across these things. From a developer point of view, just think about constructing a team that can do that; that's going to be very hard. Bringing in people with expertise around Temporal, and, you know, sometimes you need to pair it with Kafka, you need to pair it with Flink. So there is a challenge of skills there. There are not a lot of people that know how to use it and how to do that. But I think it's important. It's important that, you know, if we have a crash, if an API call to the model times out, the system as a whole doesn't just freeze or wait; we have a solution, we know how to wake up from that and continue to execute, to give the answer to the user or to the next agent and so on. So I think this is going to be very interesting from a developer point of view.

0:58:59.9 Adi Polak: It's an exciting technology for people that like the low level, I can tell you that for sure. There's huge excitement around it in the community. It is, though, a massive investment.

0:59:10.1 Joseph Morais: Yeah. Great insights. How about you, Sanjeev? What are your thoughts on durable execution?

0:59:14.9 Sanjeev Mohan: I had so many thoughts cross my mind. First of all, when software applications became really popular, if an application crashed, basically our laptop would maybe give us the blue screen of death or something like that. We rebooted it and then we started reusing it. Not a problem, right? I mean, no harm done, except that I lost time and things like that. Kubernetes came along and said, don't worry, we will auto-heal and auto-restart, and we will bring that level of reliability. But it took Kubernetes more than 10 years, actually it turned 10 years old last year, to really figure out what happens when a database crashes, especially a distributed database. All of a sudden your transaction was half committed. You can't have that. So now it's so interesting to see that we are applying the same principles to AI workloads and LLMs. And as you said, if these services malfunction in the middle of it, the cost is much more than that of a database or an application. So, like I said earlier, the stakes are higher in AI. And so we need guardrails in place to make sure that we have consistency, we have recoverability.

1:00:39.3 Sanjeev Mohan: And what I'm actually hearing from both of you is how important observability becomes. So durable execution sounds like yet another important checkbox, if you will, to make LLMs really production ready.

1:00:58.5 Joseph Morais: Yeah, I agree too. I think it's just another tool in the toolkit for building around agentic AI, right? Because there are other non-agentic use cases where durable execution would be important as well. I think AWS announced durable execution for Lambda, which is one of my favorite services. Do I think it's necessary for all agentic workloads? Maybe not. There's going to have to be some balance between cost, implementation effort, and complexity. But I do think for the right AI workloads, and other workloads too, it will be absolutely critical.
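[Editor's note: the durable-execution idea the panel describes — persisting each step's result so a crashed workflow resumes where it left off instead of freezing or re-running — can be sketched in a few lines. This is a hand-rolled illustration of the concept only, not Temporal's or Lambda's actual API; the checkpoint file name and step functions are hypothetical.]

```python
import json
import os

CHECKPOINT = "workflow_state.json"  # hypothetical checkpoint location


def load_state() -> dict:
    """Reload previously persisted step results, if any."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {}


def save_state(state: dict) -> None:
    """Persist completed step results before moving on."""
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)


def durable_step(name: str, fn, state: dict):
    """Run fn once; on restart, reuse the persisted result instead of re-running."""
    if name not in state:
        state[name] = fn()  # may raise (e.g., a model API timeout)
        save_state(state)
    return state[name]


def run_workflow() -> str:
    state = load_state()
    docs = durable_step("fetch", lambda: ["doc1", "doc2"], state)
    summary = durable_step("summarize", lambda: f"{len(docs)} docs", state)
    return summary
```

If the process crashes after "fetch" but before "summarize", a restart replays only the missing step. Engines like Temporal provide this behavior with far stronger guarantees (event histories, retry policies, timers) than a JSON file can.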

1:01:34.9 Joseph Morais: So we're here. It's crazy. I'm having so much fun, time is flying, and we're at our last prediction: improvements in generative AI will help businesses finally address legacy tech debt. So again, the assertion here is that AI will finally help large organizations tackle legacy modernization, which so far has been an extremely slow and expensive process, literally for decades now. I remember working at an insurance provider that was trying to take a million lines of COBOL, convert it to Java with some type of conversion tool, blue something, and then run it on Spring, and they thought that was just going to work. Guess what? It didn't. Adi, we started with you, so let's complete the circle. Do you think AI will pour fuel onto the modernization fire?

1:02:20.3 Adi Polak: Yes, I think so. First of all, what is AI best at? It's best at automation, pattern matching, and replicating anything related to patterns. Let's be honest: this is the greatest strength of AI today. If we're able to leverage AI to translate whatever legacy tech we have from an old system into a new system, this is where we win. And I've seen it; I've spoken with a lot of developers who are doing it right now. Some of these developers had been at the same task of addressing legacy tech debt, of moving from one system to another, for years. And just in the past year, they were able to accomplish it within a couple of months, because they leveraged AI to study the patterns of what exists in the legacy system, translate them into requirements for the new system, and then start building the new system. There's a lot of work we do as software engineers and architects to understand what those requirements are. And without AI, that sometimes means we have to dive into thousands and thousands of code files, lines, architecture descriptions, and so on.

1:03:30.0 Adi Polak: So I think AI is definitely going to accelerate everything related to that space, with automation, with migration. It will make our lives much easier; it already is.

1:03:40.6 Joseph Morais: I love that take. Sanjeev, what are your thoughts?

1:03:43.0 Sanjeev Mohan: I want to give an example of what's happening in the industry. Let's take IBM. There are a lot of IBM customers keeping COBOL running with band-aids. They don't even have anyone to manage COBOL anymore; they've all retired. But those systems are mission critical, and these customers failed to move code off COBOL into something more modern. So what IBM is doing is this: they have their own family of models called Granite. These are not general-purpose models like you find from others; they train them on COBOL documentation, COBOL manuals, and COBOL code at scale. And then they train them on Java, the newest release of Java, and they're starting to use LLMs to finally modernize these applications in an automated manner. There are also a lot of low-level differences, like little-endian versus big-endian byte order and EBCDIC-to-ASCII conversion. It's very complicated; done manually, these migrations would take forever. But LLMs are now coming to the rescue. And IBM actually has a great book of business migrating people while still keeping them on the mainframe. See, IBM doesn't want people moving off the mainframe, because that's where they'd pay the price, but they're modernizing. So this is actually already starting to happen.
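[Editor's note: the mainframe quirks Sanjeev mentions are concrete and mechanical. EBCDIC text and big-endian integers have to be converted on the way off the mainframe. A small sketch using Python's standard library (cp037 is one common EBCDIC code page; the byte values below are illustrative):]

```python
import struct

# EBCDIC (code page 037) bytes for "Hello", decoded to a Unicode string
ebcdic_bytes = b"\xc8\x85\x93\x93\x96"
text = ebcdic_bytes.decode("cp037")  # -> "Hello"

# Big-endian (mainframe-style) 32-bit integer, unpacked to a native int
big_endian = b"\x00\x00\x01\x00"
(value,) = struct.unpack(">i", big_endian)  # -> 256
```

These conversions are trivial in isolation; the hard part, which is where the LLM-assisted tooling comes in, is finding every place a million-line codebase silently assumes one encoding or byte order.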

1:05:20.2 Joseph Morais: And how about you, Will, what are your thoughts?

1:05:22.9 Will LaForest: I mean, I've seen it firsthand, already happening in 2025. I think Sanjeev is right: it's going to get easier and easier with mainframes, because now we're getting models that are actually trained on COBOL. With parochial languages like that, an out-of-the-box model like Claude still won't work very well, but it's going to get better. And I have already seen TIBCO and JMS applications being successfully migrated and put into production, where previously customers would have been like, "No way, I'm not going to touch this." Also, when Adi was talking about looking through documentation and code: developers don't want to do that, by the way. So even if they could have done it in the past, they didn't want to; they'd always go do a cool project instead. So yeah, this is going to help. I don't know how soon, but it's definitely going to make an impact in 2026.

1:06:20.0 Joseph Morais: Yeah, and I agree again. In that example, the code converter was the wrong way to do it. But with these AI tools, whether it's reviewing documentation, reviewing the code in a more intelligent way instead of just translating this function into that function, or, as Adi mentioned, having it look at the inputs and outputs of a system, you can study your systems and build brand new code that ultimately just gives you the inputs and outputs you want. Because that's what you want: you don't really care about the things in the middle. You want what goes in, what comes out, and you want it to be accurate. I do think this will finally get some people over the hump.
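[Editor's note: Joseph's "inputs and outputs" point is essentially characterization testing — record the legacy system's outputs for a set of representative inputs, then require the AI-generated replacement to reproduce them. A minimal sketch; the two rate functions are hypothetical stand-ins for a legacy routine and its rewrite:]

```python
def legacy_rate(amount: float) -> float:
    """Stand-in for the old COBOL routine."""
    return round(amount * 0.07, 2)


def modern_rate(amount: float) -> float:
    """Stand-in for the AI-generated replacement."""
    return round(amount * 7 / 100, 2)


def characterize(fn, inputs) -> dict:
    """Capture input -> output pairs as the behavioral contract."""
    return {x: fn(x) for x in inputs}


inputs = [0.0, 19.99, 100.0, 12345.67]
golden = characterize(legacy_rate, inputs)  # recorded from the legacy system
mismatches = {
    x: (golden[x], modern_rate(x))
    for x in inputs
    if modern_rate(x) != golden[x]
}
```

An empty `mismatches` dict says the new code matches the old behavior on the recorded inputs, which is exactly the "what's in, what's out" contract, without caring how the middle is implemented.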

1:06:50.0 Joseph Morais: So that's it. But before we go, for each of you, and we'll start with you, Sanjeev: what is the single most important step an organization should take in 2026 to get ahead of these predictions?

1:07:08.9 Sanjeev Mohan: The most important thing for data leaders, and by data I include AI, is to take a long-term view. We live in a world where, for promotion's sake, for padding our resumes, we just say, "Okay, here is a new model, let's go do things." But governance, for example, should be built in. So think of it as a program that will last for maybe a couple of years, not a quick win.

1:07:40.0 Joseph Morais: I love it. Will, what's your most important step?

1:07:42.7 Will LaForest: I think leaders should all send me $1 million, and then, magically, they're going to be prepared. Let me just give you my Venmo account. The thread running through all of these, and obviously we're a data company, but I think we can agree that all of these come down to data management: having the flexibility to use your data, providing the right data at the right place at the right time at the right cost. There's obviously the people side, the data engineering skills, but keep investing in a very flexible and open data architecture. I know that sounds very generic, but in my opinion that will pay dividends for all of these things.

1:08:27.3 Joseph Morais: I love it. And Adi, let's close with your thoughts.

1:08:30.6 Adi Polak: Yes. As we enter day two of the AI era in 2026, I think two things are critical. One is trust: build systems that we can trust, that we know how to govern, that we know what happens inside of, and that our users and customers can trust. The second is utility: if we harden the system too much, no one will be able to make any progress. So we want to balance that trustworthy system with utility. And this is what I think architects and developers should think about in 2026.

1:09:05.5 Joseph Morais: Excellent insights from all three of you. For me, the most important step you could take is read the predictions report. That's why we're here, folks.

1:09:13.4 Sanjeev Mohan: I was thinking, how many plungers can Will buy for $1 million?

1:09:19.6 Joseph Morais: This is true.

1:09:20.3 Will LaForest: A lot of plungers. A lot of plungers.

1:09:22.4 Joseph Morais: It might be easier, Will, to just get like 100,000 tech leaders to send you $10 each, but I don't know.

1:09:27.5 Will LaForest: Yeah, okay, that's probably more likely.

1:09:29.3 Joseph Morais: That's just me. So, to everyone here on the panel and everyone viewing at home, thank you so much for joining me today. I absolutely appreciate it. This was just fantastic; I can't believe how fast it went by. If you're at home, the full 2026 predictions report is available right now, and we'll link it in the show notes. Just click on it and get those insights. There's much more in there than we talked about, and I really think it'll help you kick off 2026. Thanks, everyone. Thanks again to Will, Adi, and Sanjeev for joining us, and thanks to you for tuning in. As always, we're brought to you by Confluent. The Confluent data streaming platform is the data advantage every organization needs to innovate today and win tomorrow. Your unified platform to stream, connect, process, and govern your data starts at Confluent.io. If you'd like to connect, find me on LinkedIn. Tell a coworker or friend about us, and subscribe to the show so you never miss an episode. We'll see you next time.