Our host, Joseph Morais, revisits the most impactful moments across data streaming, event-driven architecture, and AI in this year-end highlight, showing how real-time data became the backbone of modern data systems and decision-making.
In 2025, real-time data moved from important to mission-critical. From billion-event pipelines to AI agents reasoning over fresh data streams, the conversations this year revealed just how fast organizations are rethinking their data infrastructure and what’s possible when data streaming becomes the foundation.
In this episode, Joseph Morais curates standout moments from the year: the stakes of real-time data, the architectural decisions behind enterprise-grade streaming, and how AI strategy is shifting toward agentic, event-driven systems. You’ll hear leaders discuss architectures designed for zero data loss, solid governance, multi-agent orchestration, and why organizations choose Confluent as their platform for data streaming at scale.
You’ll learn:
The Guests:
Guest Highlights:
“We are processing events on a second-level latency end-to-end… it'll show up on your end customer's invoice in sub-seconds, and you're cutting users off for fraud, all done exactly once.” — Cosmo Wolfe, Metronome
“Security loves us, right? Because everything is immutable. We know who did everything and we know who can access what and who has been accessing what.” — Chris Kapp, HS1 (Henry Schein One)
“These are not dashboards… If you want to transform your business and take a proactive approach, reacting to signals as they’re happening, you need to shift your mindset and move data responsibilities left.” — Sean Falconer, Confluent
Episode Timestamps:
(00:00) - Looking Back at Data Streaming in 2025
(00:38) - Segment 1: Unique Streaming Use Cases
(06:05) - Segment 2: The Power of Partnerships
(11:23) - Segment 3: Data Streaming and Real-Time AI
Dive Deeper into Data Streaming:
Links & Resources:
Our Sponsor:
Your data shouldn’t be a problem to manage. It should be your superpower. The Confluent data streaming platform transforms organizations with trustworthy, real-time data that seamlessly spans your entire environment and powers innovation across every use case. Create smarter, deploy faster, and maximize efficiency with a true data streaming platform from the pioneers in data streaming. Learn more at confluent.io.
0:00:00.2 Joseph: Welcome to Life Is But A Stream, the web show for technology leaders who need real-time insights. I'm your host, Joseph Morais, technical champion here at Confluent. As we wrap up 2025, we're putting the usual format on pause for something special today. 2025 has been an incredible year for data streaming, and I've had the privilege of talking to amazing technology leaders about it all, from foundational concepts of data streaming to cutting-edge use cases with real-time data and AI. For me, this year has been a masterclass in seeing what's possible, and I'd like to share a highlight reel of the most impressive moments. The moments that truly made me, and hopefully you, say, "Wow, that's possible now?" Let's jump in.
0:00:48.8 Joseph: 90% of Metronome's engineering resources are spent building features, not operations. That's an amazing number if that's right. Highly available at scale, 10,000 invoices per second, billions of events per day with no data loss and extremely low end-to-end latency. Is that a good description of where you guys are at thanks to data streaming?
0:01:06.4 Cosmo: Yes, and just to put a fine point on it, you mentioned no data loss and extremely low end-to-end latency. We are processing events at second-level latency end-to-end. So, if an event occurs in your business, it'll show up on your end customer's invoice in sub-seconds, and you're cutting users off for fraud. And that's all done exactly once. There are no asterisks on that, and yeah, at the scale numbers that you mentioned.
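In practice, Metronome's exactly-once guarantee comes from Kafka's transactional machinery, but the observable effect Cosmo describes — a redelivered event never double-bills a customer — can be sketched with an idempotent aggregator that deduplicates by event ID. This is a toy illustration with hypothetical names, not Metronome's actual pipeline:

```python
# Idempotent billing aggregator: replaying the same event twice must not
# double-charge. Real systems get this from Kafka transactions; here the
# effect is simulated with a processed-ID set (all names are hypothetical).

class BillingAggregator:
    def __init__(self):
        self.processed_ids = set()   # event IDs already applied to an invoice
        self.invoice_totals = {}     # customer_id -> running total

    def apply(self, event):
        """Apply a usage event exactly once, keyed by its unique event_id."""
        if event["event_id"] in self.processed_ids:
            return False  # duplicate delivery: ignore, total is unchanged
        self.processed_ids.add(event["event_id"])
        totals = self.invoice_totals
        totals[event["customer_id"]] = totals.get(event["customer_id"], 0) + event["amount"]
        return True

agg = BillingAggregator()
event = {"event_id": "evt-1", "customer_id": "acme", "amount": 5}
agg.apply(event)
agg.apply(event)  # redelivered: no effect on the total
print(agg.invoice_totals["acme"])  # 5, not 10
```

The key design point is that deduplication is keyed on a stable event identifier rather than on event contents, so retries and redeliveries are harmless by construction.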
0:01:31.1 Joseph: Yeah. Honestly, being able to turn down, you know, a trial within a second, I mean, how much better can you do than that? You'd almost have to go into the future. And if you figure that out, Cosmo...
0:01:39.4 Cosmo: We're working on it, if you're interested.
0:01:41.3 Chris: We care very deeply about protecting HIPAA data; securing it is in the top three attributes we plan for in our architecture. We cannot lose that data. So, what we did is we heavily invested in a naming strategy for our topics. We have private internal topics for each team that they can use, and then they have the public ones that become the layer that others can consume from. The other thing is that there's a newish tagging feature that's available, and all the governance on who's allowed to change that tagging and things like that is built in there. Security loves us, right? Because everything is immutable, we know who did everything, and we know who can access what and who has been accessing what.
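The private/public topic naming strategy Chris describes can be made machine-checkable. A minimal sketch, assuming a made-up `<visibility>.<team>.<domain>.<name>` convention (this scheme is an illustration, not HS1's actual convention):

```python
import re

# Hypothetical convention: <visibility>.<team>.<domain>.<name>
# e.g. "private.billing.claims.raw" or "public.billing.claims.v1"
TOPIC_PATTERN = re.compile(r"^(private|public)\.([a-z0-9-]+)\.([a-z0-9-]+)\.([a-z0-9.-]+)$")

def parse_topic(name):
    """Return (visibility, owning_team) for a well-formed topic name, else None."""
    m = TOPIC_PATTERN.match(name)
    return (m.group(1), m.group(2)) if m else None

def may_consume(consumer_team, topic):
    """Teams may consume their own private topics and anyone's public topics."""
    parsed = parse_topic(topic)
    if parsed is None:
        return False  # malformed names are rejected outright
    visibility, owner = parsed
    return visibility == "public" or owner == consumer_team

print(may_consume("billing", "private.billing.claims.raw"))  # True
print(may_consume("reports", "private.billing.claims.raw"))  # False
print(may_consume("reports", "public.billing.claims.v1"))    # True
```

Encoding visibility and ownership in the topic name itself is what lets access rules, ACL generation, and audit tooling be driven from a single convention.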
0:02:28.9 Alex: Our business is kind of built on top of data. A lot of it is data that people have been really generous and allowed us to use, right? We will train on that data and make a better product. And that's the promise we try to fulfill: if you let us look at your data, we will train Tab on it and make an even better Tab model that you guys can use. But we think that also bestows upon us a really serious responsibility to not leak the data, not do anything weird with it, not let anybody else see it, not let us see it where possible, to honor that trust that people are putting in us, and really only use it to further the thing they've given it for, which is building the best model we can serve to our users. And so WarpStream lets us do that in a way where the data sits entirely inside stuff that we control and have access to. So we don't have to worry about, oh man, if our data provider gets hacked, what are we going to do? All the people's stuff is out there. Being able to just say all this stuff is locked down in S3, we have control of it, there's just nothing going on there, is really, really important to us.
0:03:18.3 Joseph: So, what's the future of data streaming and AI agents at Airy?
0:03:22.7 Steffen: So, from my perspective, it's really going towards the question of how do you build a multi-agent system on a data streaming platform? Because this is kind of the holy grail from my perspective. This image of microservices that consume from data streams is not really new. It has been around for years and is battle-tested to some extent. And I think the big change we're seeing now, also with the latest trends in the market, is that there is a need for building these agents, basically providing a way to standardize the tools and the context within those agents. This is something that Anthropic, for example, pushed with MCP last year. It's now being adopted across the whole industry for the capabilities of the tooling and, let's say, the access to data for the agent. So, this is something we see as highly relevant at this point as an integration topic. And then we also see the announcement that Google made last week with A2A being highly relevant in that aspect as well, governing inter-agent communication.
0:04:40.0 Steffen: And this is really very, very interesting from our perspective, because when you as an agent know that there is another agent in your organization, you can call that agent and have these two or more agents work together on a specific subject. So, this is highly relevant to how you build out these multi-agent systems, especially in a complex organization where you might end up with thousands of agents running in parallel, together with your workforce, trying to provide a service to customers and basically keep things running.
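The multi-agent pattern Steffen describes — agents discovering and calling one another over the same streaming substrate that microservices use — can be sketched with an in-memory event bus standing in for Kafka. All names here are hypothetical, and real MCP/A2A integrations involve far more (discovery, auth, schemas):

```python
from collections import defaultdict

# Toy event bus standing in for a streaming platform: agents register
# handlers per topic, and any agent can publish work for another to pick up.
class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

bus = EventBus()
results = []

# "Research" agent: answers lookup requests by publishing to a reply topic.
def research_agent(event):
    answer = {"request_id": event["request_id"],
              "answer": f"summary of {event['query']}"}
    bus.publish("research.replies", answer)

# "Planner" agent: delegates a sub-task, then collects the reply.
def planner_collect(event):
    results.append(event["answer"])

bus.subscribe("research.requests", research_agent)
bus.subscribe("research.replies", planner_collect)
bus.publish("research.requests", {"request_id": "r1", "query": "Q3 churn"})
print(results)  # ['summary of Q3 churn']
```

Because agents only share topics, not direct references to each other, new agents can join the collaboration by subscribing — the same loose coupling that made event-driven microservices scale.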
0:05:12.7 Joseph: Look folks, I spent 25 years in the trenches, from crawling under desks for desktop support to racking servers, from DevOps to cloud engineering, and I can tell you, things we used to dream about are finally reality thanks to data streaming. In the clips you just saw, you got to hear about the holy trinity of modern architecture. First, Cosmo at Metronome blew my mind with sub-second latency, literally processing billing events before you can blink. Then Chris at Henry Schein One and Alex at Cursor reminded us that with great power comes great responsibility: locking down HIPAA data and training AI models with total sovereignty. And finally, Steffen at Airy brought it home. We aren't just watching the AI Cambrian explosion; streaming is the engine driving it. Real time isn't the future anymore. It's the baseline. You just heard incredible data streaming use cases. This doesn't happen in a vacuum. It requires a complete data streaming platform to stream, connect, process, and govern all of your data. Let's hear why leaders choose to build their systems with Confluent. Roll the next tape, please.
0:06:21.6 Ethan: I always say, like, I can build that myself. Ah, this is an open source library, I can host it. You know, don't pay that vendor, whatever, right? And then you realize that you have to do migrations, you have to maintain stuff 24/7, and there are a lot of random edge cases you don't know about until you deploy into production. Especially if you're breaking such a critical system. Because normally streaming happens almost at the extreme left, right? So if you break that system, every other engineering team is going to complain and you just become infamous, right? For no reason. And a lot of that normally happens when you try to host it yourself. And again, I am sure there are experts who could do it pretty easily, but I think most people aren't experts. And that's why we always wanted to go for a managed service from the get-go.
0:07:05.1 Joseph: Absolutely, especially when it becomes your critical system of record, right? Some people aren't that far into data streaming yet, but when they are, they find that this becomes something that can't break. So, why not pass that off to somebody who's made it as unbreakable as it can possibly be? And then of course there's having that level of support if things do go awry, which I hope they never do.
0:07:25.9 Joe: We started with open source Apache Kafka, and for the most part, within a very specific use case, it worked pretty well. But if you fast forward a year and a half later, we weren't keeping up with the patches, we weren't upgrading the system. Once you start scaling across teams and across systems, you quickly realize that if you don't do good schema management, version control, security practices, all that enterprisey stuff that wraps around and lays on top of your foundation of data, you're not going to scale very well. And looking at Confluent Cloud in particular, we just said, okay, it's hosted by those who invented the technology, so they obviously know what they're doing, and they'll have all those enterprise-grade capabilities built in. Plus the other thing you don't get otherwise: they're going to innovate, and you're going to innovate on top of the platform, whereas if we just host it ourselves, we're just going to use what's there.
0:08:17.4 Joseph: Once you're enterprise and you have hundreds of engineers, all of those things, like OAuth, you mentioned access control lists, and you also mentioned part of our governance package. We have a thing called Lineage that allows you to visualize all the flows. All of that makes building at enterprise scale much more tenable, and frankly, why run something that's just undifferentiated heavy lifting? I've always had that opinion, and I used to be one of those DevOps engineers. I used to be at a startup and I was the guy that ran Kafka, and this was before Confluent Cloud existed. So, I realize and I understand that pain. What led Swedbank to choose the Confluent data streaming platform over various open source and vendor products?
0:08:58.0 Rami: Yeah. I think, as I mentioned before, the main reason is that we wanted to make sure we have a full package of services. We don't want to reinvent the wheel by developing what needs to be developed ourselves. That was the main reason we wanted to use the Confluent platform: the ecosystem that was really built around it. And what I like about it is that they keep developing this platform in a consistent way. Every time, we hear that something good is coming. For example, RBAC versus ACLs for access control was a major milestone for us in adopting the platform and deciding that yes, we want to go with this. And there are still lots of things coming, which is really good. This is why we like that this product development is consistent and going down the right path.
0:10:02.7 Joseph: Yeah. Thank you. That's wonderful. I think that's ultimately what every ISV is trying to achieve. We're trying to put out products that enable you to do something faster so you can focus on the things that only your engineers can do, and that's building specific value for Swedbank's customers. Only your engineers can do that. We figured out data streaming just like other companies have figured out their technologies. So, why not utilize something that's already battle-tested and proven, and get off to a quicker start?
0:10:40.7 Joseph: Having spent more than half my life in IT, from swapping backup tapes to architecting clouds, I know the instinct. You look at an open source library and think, I can host that. I can save us a budget line item. Those previous three clips are a wake-up call. Ethan at Allium dropped the truth bomb: do not become infamous for breaking the critical path because you wanted to DIY the infrastructure. Then Joe at Covetrus reminded us that day one is easy, but day two, the patching, the schema management, the upgrades, that is where the pain lives. And finally, Rami at Swedbank brought the enterprise hammer down. When you need bank-grade security, RBAC, and multi-tenancy, you cannot waste time reinventing the wheel. The lesson here? Stop babysitting the plumbing and start innovating on the product. Those use cases showed us what was possible. But what really got people excited this year was AI. And if 2025 taught us anything, it's that AI and real-time data are inseparable. Check out what some of our guests have planned for 2026 and beyond.
0:11:47.4 Joseph: What is the single most important takeaway you want our audience to remember about the future of AI and the critical role that data streaming plays in it?
0:11:54.7 Sean: Well, I think the biggest thing is really understanding why this fresh information is critical for building these systems. It's not always thought of that way, because of the historical way we've used traditional ML and analytics: we automatically bucket foundation models and agents into this post-analysis step. But these are not dashboards. They could be dashboards, sure, that's a fine use case. But if you want to really transform your business, take a proactive approach, and augment a lot of your existing workforce to help them be more efficient, so that you're reacting to signals as they're actually happening, as business events happen, you need to change your mindset and think about how to take a proactive stance and know what's going on in your business. Shift a lot of that responsibility left. And we've seen that shift-left stance in many, many businesses over the last several decades, certainly in my lifetime in business. You know, when I first started as an engineer, testing was essentially: you wrote code, you checked it in, you threw it over a wall to the QA department, and then they would test it, find problems, and throw it back over the wall.
0:13:15.0 Joseph: I remember those days.
0:13:15.4 Sean: And I think if you told engineers at the time that they were going to be responsible for testing, they'd be like, whoa, whoa, whoa, we can't do that. There's no way. But what has happened is we shifted that responsibility left, because the creator of the code is in a better position to figure out what the tests should be. It doesn't mean QA has gone away. People still QA these things, but now you're leveraging them to do something more than just basic testing. And I think data has to go through a similar transformation: shifting the definition of the data product, the schemas, all that stuff, left, closer to the source. Those people also have a lot more knowledge about what that data is. And if you do that, then suddenly it opens up the new use cases around AI and agentic AI that every company wants to invest in, but most companies aren't ready to take advantage of because their data systems aren't ready.
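Shifting data responsibility left, as Sean describes, means the producing team validates records against its data product's schema before anything reaches the stream, rather than letting consumers discover bad data downstream. A minimal hand-rolled sketch with hypothetical field names (in production this role is typically played by a schema registry and serializers):

```python
# The producing team owns the schema of its data product and rejects bad
# records at the source, instead of letting consumers find them later.
ORDER_SCHEMA = {
    "order_id": str,
    "customer_id": str,
    "amount_cents": int,
}

def validate(record, schema):
    """Return a list of violations; an empty list means the record conforms."""
    errors = []
    for field, expected in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

def produce(record):
    """Producer-side gate: only schema-conformant records reach the topic."""
    errors = validate(record, ORDER_SCHEMA)
    if errors:
        raise ValueError("; ".join(errors))
    return record  # a real producer would serialize and send to Kafka here

good = produce({"order_id": "o-1", "customer_id": "c-9", "amount_cents": 1299})
bad_errors = validate({"order_id": "o-2", "amount_cents": "12.99"}, ORDER_SCHEMA)
print(bad_errors)
```

The point of the sketch is where the check lives: the malformed record is rejected before publication, so every consumer downstream can trust the stream's contract.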
0:14:03.9 Joseph: What's the next feature or two that you're looking to launch with?
0:14:07.1 Jeffrey: I would say data product linking. Because you're going to have all these multiple data products out there, how do you know how to link them to one another? How are you going to match them? How are you going to do that discovery of those data products? Exactly. So you need that. That's going to be something we're going to offer, because I believe there are customers who say, okay, we don't want you to build a data product, we already have data products, and they need to talk to one another. How are people going to discover those data products?
0:14:39.0 Joseph: So, it's almost like a relationship discovery, something like that.
0:14:42.3 Jeffrey: Absolutely, absolutely.
0:14:44.3 Joseph: I like that.
0:14:44.6 Jeffrey: Yeah. So, that's going to be something. I mean, this is a fascinating world, especially with AI, with general intelligence coming out, and how we could take advantage of that. Because right now, for the most part, we utilize a lot of shallow learning, basic statistics. But it's really about looking at a schema, and sometimes organizations don't use foreign keys, they don't use primary keys the way they should, so how do you make sense of that? Those are really some of the challenges we're trying to figure out now.
0:15:18.8 Joseph: I really like that, because you're basically saying it's not just limited to your use case. As people start to build some of these gen AI systems and feed them data, they can get into this pattern of "tell me what I don't know." And that's really what you're saying: hey, I have these data products, I don't know how, or if at all, they relate to each other, but if they do, identify that for me. And now it unlocks these new opportunities for innovation. I've got to ask, what's next for data streaming at APFA?
0:15:45.8 Todd: In particular, as flight attendants, we have these different schedules. Basically, our monthly schedule comes out, and it's a series of different flight sequences that each one of us has to fly. That information changes all the time. We have delays, we have reschedules, we have cancellations. All of that information is constantly changing, and right now we don't have access to real-time information on it. I can definitely see, and we have kind of the proof in the pudding here of how this has worked, that we can set up something very similar, in order for our members to have the real-time access they don't have right now. So, I definitely want to be working on that.
0:16:26.0 Joseph: All right. We brought it all the way home with the topic everyone is talking about in 2025: artificial intelligence. But we aren't just talking about chatbots writing poetry. We're talking about the plumbing that makes intelligence possible. I love the clip from Sean Falconer. He brought up a concept I've lived through in my DevOps days: shifting left. And just like we stopped throwing code over the wall to QA, we have to stop throwing raw data over the wall to AI. We need schemas and definitions at the source. Then J3 at SignalRoom took that further, using AI to actually discover the relationships between your data products. It's the ultimate "tell me what I don't know" moment. And finally, Todd at APFA reminded us why we do this. It's not just for fancy algorithms. It's so flight attendants aren't stranded at the gate because their schedule data was five minutes old. If your data isn't real-time, your AI is just hallucinating faster.
0:17:21.9 Joseph: I want to thank all of our incredible guests this year for sharing their stories and innovations. To our listeners, thank you for making Life Is But A Stream part of your real-time data journey. This show is sponsored by Confluent, and if any of today's use cases sparked ideas for your organization, head to confluent.io to learn more about how data streaming can transform your business. We'll be back in 2026 with more customer stories and insights for those who need to know what's going on in the data streaming world and what it means for your organization. And if you haven't yet, subscribe to our YouTube channel and find Life Is But A Stream on your podcast app of choice so you don't miss out. Until then, I'm your host, Joseph Morais. Thank you for joining me. Happy holidays, and see you next year.