Agent Ready episode 1 with Langflow: RAG workflows & agent development

Auth & identity

Aug 19, 2025

Author: Stytch Team

Check out the first episode of the Agent Ready video series: RAG workflows & agent development, featuring Stytch CEO Reed McGinley-Stempel and Langflow Head of Developer Relations Carter Rabasa.

Video overview

AI has forever changed what users expect apps to be able to do, and developers are increasingly relying on agents to deliver these experiences. In this session, you'll learn the foundational patterns and best practices for building AI-powered agents. We'll explore how to balance emergent behavior with deterministic control, and how to design systems that blend autonomy with the oversight today's products require.

You’ll come away with an understanding of:

  • The core challenges of managing memory, context, and tool orchestration
  • Strategies for combining short-term prompts, long-term embeddings, and procedural APIs
  • Practical techniques for debugging agent behavior and improving observability
  • How emerging agent platforms are evolving to close the reliability gap with traditional SaaS

The video includes a live demo and hands-on walkthrough to show how these concepts translate into real-world agent development, so you’ll leave not just with ideas, but with working knowledge you can apply immediately.

Full transcript

Reed: Hi everyone. My name is Reed. I'm the co-founder and CEO of Stytch, and I'm excited to be kick-starting the Agent Ready video series with all of you today. We've got a very special guest. We've got Carter Rabasa, who leads developer relations at Langflow. And I'll let you do a quick introduction of yourself, Carter, in a moment.

One of the things I'm really excited about is coming from your experience where you've dug really deep into what are the developer pain points and learnings and things that folks should keep in mind as they're starting to expose themselves to agents for the first time, think about building agentic flows.

And so this is a great introductory element of our overall video series, as I think we'll touch on a lot of things—maybe lightly—that will then become larger deep dives throughout. So welcome again, Carter. We're excited to have you.

Carter: Yeah, Reed. I'm stoked to be here. So I guess, as a quick introduction to me, my name's Carter. I live up here in Seattle, Washington, and I've [00:01:00] been a programmer for the better part of my life. I started programming when I was maybe a sophomore in high school. Good old Turbo Pascal back in the day. So I've just been programming for a very long time.

I lead developer relations here on the Langflow team, and I've been in developer relations for over a decade now. A really important part of dev relations to me is just sort of making programming more accessible to more people. It's part of what we do—we're educators. We educate people on how to use our tools, but the education often exceeds those narrow boundaries.

So it's a part of my job that I love. And AI has obviously completely changed what it even means to be a programmer, and it's drastically expanded the scope of people who can build software. So it's never been a more exciting time to be in this industry.

Carter: Things are [00:02:00] changing super fast. If I'm being super honest, I got to AI late. I was a normal full stack web developer, working at a series of API companies. I got my start in DevRel at Twilio, which I'm sure a bunch of folks are familiar with. When I joined, they were a very anonymous company.

And I kind of took a chance. But now they're very well known. I really didn't get into AI until about a year and a half ago. So this is well past ChatGPT. I joined DataStax, which was a company that was getting into the AI space, shipping vector databases, and eventually acquiring Langflow, which is the product I work on right now. And DataStax was actually just acquired by IBM, so technically, I guess I'm an IBM employee. But I just say this to say that I'm new to AI. I have not been in machine learning or AI for the past three, four, or five years. [00:03:00] And I can really empathize with people that are just getting started. And I think that the good news is it's never been easier to build with AI. The tools just keep getting better and better and better. We'll go into that a little bit. And obviously, agents are a big part of what's happening right now.

Reed: And what's interesting there is, you know, in the conversations we've had in the past on AI and AI agents, I think it's actually kind of a strength of yours that maybe you've come to it over the last one and a half, two years, rather than someone that's been steeped in it for 6, 7, 8 years. 'Cause I've noticed—and this is probably partially also just 'cause of your DevRel experience—there's much less jargon, and when there is terminology or jargon, it's very clearly identified in terms of what you mean by it. And maybe a good starting-off point there is, you know, even the term "AI agent" itself is a loaded term. There's lots of different interpretations. And, you know, maybe not to say that there's only one that it can be defined as, but I'm curious—how would you define it for our audience, knowing that we're gonna go into much more detail into how these things are built, how the systems work behind them? [00:04:00]

Carter: Yeah, for sure.

And I think your point is well taken, right? I think AI has historically been the domain of very smart people working in the machine learning space. And for sure, most people are not capable of building foundational models themselves. So most of what we're gonna talk about is: how do you build software that leverages the incredibly complex and hard-to-replicate things that these machine learning engineers have done.

And agents are a great place to start. So what is an agent? I think that there's probably a really fully loaded definition of what an agent is that would suit someone who has been in AI for the last six to seven years. I myself have maybe a more humble definition of what an agent is.

And it just kind of goes back to the roots of building software, right?

Like, you know, when you normally build traditional [00:05:00] software, it's completely deterministic, right? You've got some for loops, some conditionals. For the most part, you understand how your program is gonna operate.

Agents are different. Agents take advantage of LLMs. I think at this point everyone is familiar with an LLM. If you're not, go check out chatgpt.com—you'll get a feel for it.

So, an agent consists of an LLM, but crucially, it consists of an LLM that has access to tools. And we can kind of get into what kind of tools, or what are the tools good for, and how do they use them.

But because you've given the LLM access to tools, the LLM now has agency. It has agency in terms of how it can respond to the request that has been made to it by the end user.

It can see what tools it has at its disposal. It can choose to use none of the tools, several of the tools.

It can combine the output from one tool and turn that into the input to a second tool. And to my thinking, this is what an agent is. I think that you can layer more and more powerful capabilities on top of that concept to solve all kinds of use cases. But I think fundamentally, if you're dealing with an LLM that is non-deterministic in nature, that has access to tools that could live on your laptop or live in the cloud, and it has agency to execute those tools, recombine them in order to achieve a goal—you've got an agent.
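
To make that definition concrete, here is a deliberately tiny sketch of the loop Carter is describing. The call_llm function below is a fake stand-in for any chat-completion API that supports tool calling, and the calculator tool is illustrative; this is not Langflow's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolCall:
    name: str
    arguments: str

@dataclass
class Reply:
    content: str = ""
    tool_call: Optional[ToolCall] = None

def call_llm(messages, tools) -> Reply:
    # Fake stand-in for a real model. A real LLM decides for itself whether to
    # answer directly or request one of the tools it was told about; this stub
    # just requests the calculator once so the loop below has something to do.
    if not any(m["role"] == "tool" for m in messages):
        return Reply(tool_call=ToolCall("calculator", "3.5678 ** 8"))
    return Reply(content=f"The answer is approximately {messages[-1]['content']}.")

# Tools are just ordinary functions the model is allowed to ask for.
TOOLS = {
    "calculator": lambda expr: str(round(eval(expr, {"__builtins__": {}}, {}), 2)),
}

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = call_llm(messages, tools=list(TOOLS))
        if reply.tool_call is None:
            return reply.content  # the model chose to answer directly
        # The model chose a tool and its arguments; run it and feed the result back.
        result = TOOLS[reply.tool_call.name](reply.tool_call.arguments)
        messages.append({"role": "tool", "name": reply.tool_call.name, "content": result})

print(run_agent("What is 3.5678 to the eighth power?"))
```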

Reed: That's a great definition. I think your point of starting kind of simply there, but also capturing a very large swath of the innovation that we're seeing right now—when you try different AI agents out, I think virtually every one I've used meets that definition. So it's good to kind of ground ourselves in that.

And so, you know, one of the things I'd like to maybe do now is dig a little bit more into: what does this look [00:07:00] like in terms of the development process? Go into the inner workings of how an AI agent actually operates, and just make sure the developers understand that—both conceptually, but also how they'd actually start to think about building them.

Carter: Sure. Sounds good. Should we pop into demo land?

Reed: Let's do it.

Carter: Yeah? Okay, let's do it. I'm gonna go ahead and share my screen. All right. And just let me know when you can see it, Reed.

Reed: Yep, we're all set.

Carter: Okay, fantastic. So look, I think at this point there are quite literally dozens of tools developers can use to build agents. I work on the Langflow team, so what I'm showing you is Langflow. This is Langflow Desktop.

It's a native macOS application that I'm running on a Mac laptop. It's completely free and open source. You can download it at langflow.org. When you install Langflow, we give you a bunch of templates that you can use to kickstart your journey. [00:08:00] One of those templates is called "Simple Agent." So why don't we go ahead and see what Simple Agent looks like?

Carter: This is Langflow—a bunch of boxes and arrows. It's a visual experience for designing agents. And this is a quick little readme that helps get the developer oriented, but I'm gonna go ahead and just dive into what we have here.

So here, as I mentioned, the fundamental core of an agent is an LLM. So here we've got "My Agent." In this case, I've wired it up to OpenAI, but you have access to all the major models—and within the OpenAI family of models, I've chosen GPT-4.1. In addition to wiring up the LLM, you want to give the agent a set of instructions. Now, in this case, this is just what comes as a default: "You are a helpful assistant that can use tools to answer questions and perform tasks."

It's important to understand that in the real [00:09:00] world, you will provide probably significantly more complex instructions for your agent that help bound its behavior and certain outcomes. But in this case, we're gonna start with this very simple idea.

Carter: Now, I mentioned that what makes agents useful is that they can access tools. So you see right here, this is where we can wire up tools—and for demo purposes only, because this is not something that you would use in real software, I've wired it up to both a calculator and a URL fetcher tool.

So what does that buy you? Well, let's go to the built-in interactive playground and let's go ahead and create a new session, and let's do some math, right? So let's say: what is 3.5678 to the eighth power?

Now, what's important about this request that I'm making as an end user is that LLMs are natively very bad at math. So when I do this, the [00:10:00] LLM is gonna realize, "Oh no, I don't know how to do this. I'm going to have to use a tool." And you can see in the debugging experience that it used the "evaluate expression" tool. And then, we go down here, and it shows you—boom—what the result is. And then you get the output from the agent that the answer is approximately 26,254.52.
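
As a mental model, a tool like "evaluate expression" is essentially a plain function plus a name and description the model can read. The sketch below is illustrative only, not Langflow's actual calculator component.

```python
def evaluate_expression(expression: str) -> float:
    """Evaluate a math expression like '3.5678 ** 8' and return the result."""
    # A production tool would use a safe expression parser rather than eval().
    return eval(expression, {"__builtins__": {}}, {})

print(round(evaluate_expression("3.5678 ** 8"), 2))  # -> 26254.52
```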

So, great. Now let's ask our agent for something—oh, did you have a question, Reed?

Reed: You know, this will be a dev question for some, but I'm here to ask the dev questions as the MC on this video series. I'm here for it. Curious about—I actually don't know the mechanics of how the LLM decides what it knows and what it doesn't know, like when to invoke tools. And I'm just curious if you have...

Carter: Oh yeah.

Reed: Insight into like how that...

Carter: I absolutely do, and I'd love to share [00:11:00] that. So all major foundational providers support tool calling. And you can imagine that—and I guess I can go ahead and share this—everything that you're seeing visually here in Langflow, behind the scenes, it's all Python, right? So there's no magic happening. This is just a really accessible front end to fundamentally a Python application.

So what's happening? In Python, we're calling OpenAI's API. They have a RESTful API that you can use called chat.completions, right? That's what we're doing.

Carter: This is a chat completion API. Because we've wired a calculator tool and a URL tool to this agent, when we make the RESTful API call to OpenAI, we tell it that it has access to a calculator tool and a URL tool. And very crucially—[00:12:00] we’re very specific about it. The tools have names and they have descriptions. And they have actions. So this information is passed via the API call to OpenAI. Then it is OpenAI—the actual foundational model itself—that decides: “Oh no, I can’t answer this question, let me send a response back to the caller that effectively invokes one of these two tools.”

So that’s what happens. In this particular case, if I go back to the playground—OpenAI realized as soon as it received this query that it didn’t know how to answer it. It then sent a response back to us telling us to use the evaluate expression function, with the value being passed in right here.

Does that make sense?

Reed: Yeah, that's a great explanation. Thanks, Carter.
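
In the OpenAI Python SDK, the request Carter is describing looks roughly like the sketch below. The tool name, description, and parameter schema are illustrative rather than Langflow's exact internals, but the shape of the chat.completions call and of the tool-call response reflects how the API works.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative tool schema: a name, a description, and typed parameters that
# the model reads when deciding whether (and how) to call the tool.
tools = [{
    "type": "function",
    "function": {
        "name": "evaluate_expression",
        "description": "Evaluate a mathematical expression and return the numeric result.",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "The expression to evaluate, e.g. '3.5678 ** 8'",
                },
            },
            "required": ["expression"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that can use tools to answer questions."},
        {"role": "user", "content": "What is 3.5678 to the eighth power?"},
    ],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model decided it can't answer directly and asked us to run a tool.
    call = message.tool_calls[0]
    print(call.function.name)       # e.g. "evaluate_expression"
    print(call.function.arguments)  # e.g. '{"expression": "3.5678 ** 8"}'
else:
    print(message.content)
```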

Carter: Yeah. And look, honestly, tool-calling in some ways predates agents. Or it was perhaps the precursor to what we now sort of are familiar with as agents. So just to sort of pay off the rest of the demo, let's ask something completely different, right?

What kind of news— Let’s see—what kind of news is there about Stytch, very famous startup, in the past month?

So, importantly—exactly like what we said before, Reed—this query is going to OpenAI. It’s going specifically to their 4.1 model. And the model is realizing: “Oh wow, I simply am not trained on data from the internet that is as recent as the past month.” So it has to now execute a different tool called fetch content.

And this is actually quite [00:14:00] magical. I, as the programmer—the person who built this Langflow app—I actually did not provide it with any information about something called news.google.com. That kind of knowledge is simply baked into the foundational model itself.

So it has knowledge of news.google.com. It then passes in a search query of “Stytch,” and it comes back with a bunch of information. And that information gets packaged and returned to the user. We can see that—here are a couple news highlights about... let’s see, some Plaid, Alni hiring from Coinbase, Exploding Topics...

Yeah, so it kind of used Google News to find this information.
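
A "fetch content" tool can be as simple as the sketch below: the model supplies the URL (here, a news.google.com search it already knows about), and the tool hands back the page text for the model to summarize. This is an illustration, not Langflow's built-in component.

```python
import urllib.request

def fetch_content(url: str) -> str:
    """Naive URL fetcher: return the raw text of a page for the model to read."""
    # A real tool would handle HTML parsing, redirects, rate limits, and so on.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```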

Reed: That's very helpful. And one thing I was curious about—because those articles were from about a year ago; I remember them, they were about your recruiting folks—is that obviously there's some tuning that probably happens. How do you make sure it actually gets you the last 30 days? Because I'm sure there's a lot of this in agent development where you do one thing and then iterate toward the exact meaning you had.

Carter: Yeah, so I think this is where a bunch of different things come into play, right? Agent instructions can be helpful here.

I'm using a built-in—rather naive—tool, which is a URL fetch tool. There are other tools that are specifically tailored for agentic crawling and search retrieval on the internet.

So one of them that we're actually going to look at in a little bit is called Tavily. Tavily is another startup that has a search API that is integrated with Langflow. So I think we’ve kind of stumbled into one of the downsides of non-deterministic applications, right? In agent building.

And ultimately, from a technique perspective, there's just a lot of [00:16:00] testing. And you're going to hear—we talked about jargon before—you're going to hear terms like "evals."

So developers that are building agents, or working with LLMs, are going to want to build evals.

And what is an eval? That’s just jargon for tests. I honestly don't know why the AI industry insists on creating new terms for old ideas, but evals are just natural language tests. And we're actually going to look at some tools that can help you do that in a little bit.
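
To make the "evals are just tests" point concrete, here is a toy eval in pytest style. The run_agent and extract_number helpers are hypothetical stand-ins, stubbed out so the example runs on its own; dedicated eval tooling layers datasets, scoring, and dashboards on top of this same idea.

```python
import math
import re

# Hypothetical stand-ins: in a real project these would call your agent end to
# end and parse its natural-language reply. Stubbed here so the eval runs.
def run_agent(prompt: str) -> str:
    return "The answer is approximately 26254.52."

def extract_number(text: str) -> float:
    return float(re.search(r"[\d,]+\.?\d*", text).group().replace(",", ""))

def test_calculator_answer_is_correct():
    answer = run_agent("What is 3.5678 to the eighth power?")
    # Output is non-deterministic, so assert on properties, not exact strings.
    assert math.isclose(extract_number(answer), 3.5678 ** 8, rel_tol=1e-3)
```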

Carter: I think this is actually a great jumping-off point to a more realistic and interesting agentic example. You wanna dive into deep research?

Reed: Let's do it. You might be teeing this up, but I was just gonna say—I’m very excited to understand this in more depth, because I use the deep research agentic use case with [00:17:00] ChatGPT all the time. So maybe you’re already kicking off with that, but I’m very excited about it.

Carter: I think so, yeah. I mean, the reason I built this flow for us today is that people with ChatGPT access use deep research all the time. And it's interesting to think about, as a developer: how could you build your own deep research?

So first, before I really dive into this, let's just go ahead and use it, right? Let’s come up with a topic that we want to do some deep research on.

I'm gonna go ahead and delete my session and create a new session. Here we go.

We have a sort of a built-in query for us, right? "I'm researching the effects of screen use on children. I'm a father, I've got two teenagers—this is very relevant to me. Help me build a report covering the last seven years."

So I'm gonna go ahead and kick off this query. And I'm gonna close the interactive playground.

One thing that’s kind of fun about Langflow is [00:18:00] that you can see what components are running at a given time and you can follow the flow of execution as it goes down the flow. But I'm gonna go back to the beginning, because I want to show off what we've done.

So we’ve decomposed this particular research problem into one, two, three—I think almost five—agents.

Let’s dig into the first one.

Carter: This agent is wired up to—so every agent in your application, in theory, could be wired up to a different provider and a different model. This is really interesting for multi-agent use cases because you do not need to use powerful or expensive models for every step in your process.

In this case, I'm actually using GPT-4.1 Nano. In some downstream steps, I'll be using things like GPT-4.1 Mini, but you could imagine for certain portions [00:19:00] of your workflow, you could use completely free, off-the-shelf, open-source models—so long as it suits your needs.

So in this case, let's look at the agent instructions. This is a little more robust: "You are a research planner. You're going to break down the user's question into two to four sub-questions, and then you're going to return these sub-questions in a markdown format."

So I probably could have used a relatively free or cheap open-source model to do this.

And then once you've done that, these sub-questions are passed to a model—or to an agent—that I'm calling my research agent.

We can take a quick look at some of its instructions, right? So it’s a research agent. It has access to search tools. For each of the sub-questions, I wanted it to get a bunch of sources of information, which include the title, the URL, a short summary, and so on. And I also want it to return that data in [00:20:00] markdown format.
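
Stripped of the Langflow UI, the planner and researcher split Carter is describing boils down to chaining two model calls with different instructions and different-sized models. The sketch below is a rough standalone approximation rather than the exported flow, and it omits the Tavily search tool that the demo's research agent has wired in.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

PLANNER_INSTRUCTIONS = (
    "You are a research planner. Break the user's question into 2-4 "
    "sub-questions and return them as a markdown bullet list."
)

RESEARCHER_INSTRUCTIONS = (
    "You are a research agent. For the given sub-question, return sources "
    "with a title, URL, and short summary, in markdown."
)

def plan(question: str) -> str:
    # Decomposition is simple enough for a small, cheap model.
    resp = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "system", "content": PLANNER_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def research(sub_question: str) -> str:
    # A slightly larger model per sub-question; the real flow also gives this
    # step a search tool so it can pull in fresh sources.
    resp = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": RESEARCHER_INSTRUCTIONS},
            {"role": "user", "content": sub_question},
        ],
    )
    return resp.choices[0].message.content

print(plan("What are the effects of screen use on children over the last seven years?"))
```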

Carter: So, Reed—very quickly—I mentioned Tavily Search. Tavily Search is a really great, professional-grade, enterprise-grade search product. So I've created a Tavily account, wired up my API key, and connected it to my research agent as a tool.

Another thing I want to go ahead and show off—because it’s pretty fun—I want to talk a little bit about this question of observability and evals. And feel free to dive in anytime, but this is a tool called Arize.

You might notice in my Langflow—or in my flow—I'm not using any kind of Arize component. That's because Arize and several other observability tools in the AI space bootstrap off of API keys that you set in your AI [00:21:00] applications.

So because I created an Arize account and an API key, I was able to set an environment variable with the API key. And then after I do that, I get all of this observability for free.
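
Carter doesn't show the exact configuration on camera, so the variable names below are placeholders; check the Arize and Langflow documentation for the real keys. The point is simply that the integration hooks in through environment variables set before Langflow starts, not through components in the flow.

```python
import os

# Placeholder variable names only; consult the Arize / Langflow docs for the
# exact keys. With these set before Langflow starts, traces from every run
# show up in the observability dashboard without any changes to the flow.
os.environ["ARIZE_API_KEY"] = "your-arize-api-key"
os.environ["ARIZE_SPACE_ID"] = "your-arize-space-id"
```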

You’ll notice—let’s see, what is it—it’s August 1st when we’re recording this. We’re doing this at 12:29. So this is live output from the deep research flow that I’ve been executing.

And I don’t have time to go into it now, but check this out right here—Arize gives you the ability to create these things called evals.

Carter: So, in your production applications—where obviously the reliability of the response from your agents is really important—you can use tools like Arize and other similar tools to create these evals that run as your agents run and provide real-time telemetry on whether the responses [00:22:00] from your agent are succeeding or failing. It's really, really interesting.

Reed: That’s very cool. Both your point about evals—the jargon that we use around them—but also, I’ve heard of Arize, I’ve heard great things about it, but I’ve never actually seen it in action. So great context for folks building agents in terms of how they can plug this into their workflow.

Carter: Yeah, it's a really interesting tool. And as you can see here, you're just getting a ton of visibility into all the things that are happening behind the scenes as your agent is running. And, you know, Reed—we talked about it—people were building software before AI, right? And software engineers have been developing best practices over time. It started back when programs ran on a PC, and it evolved as our applications moved to the cloud. And it's evolving now as we're building with AI and building agentic applications.

But fundamentally, this isn’t significantly different than logging, right? The same kinds of ideas developers have been used to for a long time. However, because the actual execution can be so non-deterministic, the need for logging and observability is significantly higher than it used to be.

Logs, in the old days, were maybe only really useful when something went wrong and you needed to understand what part of your system had failed. Now, logs are necessary even when things go right, because it isn’t binary—it isn’t black and white. You can have a spectrum of outcomes that range from completely broken to working perfectly, and all the gradations [00:24:00] in between.

Carter: So let’s quickly go back to this question, right? I want to see what happened. Oh, fantastic—great. So at the end of this deep research execution, we now have a pretty thorough report on the effects of screen use on children. You can kind of read this all yourselves.

But I wanted to show off another fun thing. Here at the very end, I had a notifier component. Let's just go ahead and open it up. So here, I tell the agent that it's a notifier, and I tell it that if the report contains the word "Langflow," email a report in HTML format to carter.rabasa@ibm.com. And you can imagine—of course I'm being a bit cheeky because this is a demo—but you can imagine [00:25:00] in the real world, if you're building agents that are combing the internet for news, there might be certain things that you'd want to use to trigger a more urgent response to an end user.
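
In the demo that rule lives in the notifier agent's natural-language instructions rather than in code, but the deterministic equivalent is roughly the check below. The send_email and render_html helpers are hypothetical stand-ins for whatever email tool and formatter the agent has access to.

```python
# Hypothetical stand-ins for the notifier agent's email tool and formatter.
def render_html(markdown_text: str) -> str:
    return f"<pre>{markdown_text}</pre>"

def send_email(to: str, subject: str, html_body: str) -> None:
    print(f"Would email {to}: {subject}")

def maybe_notify(report_markdown: str) -> None:
    # The demo expresses this rule in the agent's instructions; this is the
    # equivalent deterministic check.
    if "langflow" in report_markdown.lower():
        send_email(
            to="carter.rabasa@ibm.com",
            subject="Deep research report",
            html_body=render_html(report_markdown),
        )
```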

So if we—and this definitely works—if we wanted to... In fact, I’m gonna go ahead and kick this off. Let’s go ahead and create a new session and say: “Please provide an overview of the most popular AI and agentic tools for developers over the past two—”

And we won’t sit here [00:26:00] and wait for this to finish running. Another thing that I thought was really interesting to talk about with all the folks watching: You might notice that the execution time for these agents is measured in double or triple-digit seconds, right?

That’s very different than the experience end users are typically accustomed to when they use applications on the web or on their mobile device. So one of the things that’s really interesting about agents is thinking about the different channels—and the different asynchronous channels—that users can use to engage with agents.

So if it’s cool with you, we could quickly switch over to a fun travel-planning demo and just show off how that works from an email perspective.

Reed: I love that. Yeah. And to your point, I’m very [00:27:00] interested to hear how you think about those async channels and how to think about the user experience on the other side of the agents you’re building, given this latency.

Carter: Yeah, for sure. So I'll go ahead and pop open Outlook. I've got a couple of emails flagged.

Here’s an email that I sent to an agent that me and my team built. And by the way, this is available for everybody—so you can go ahead and send an email to travel@emailagent.ai.

And this is a real trip, Reed. Me and my family—we're going on vacation next week. So I asked the agent to create a travel itinerary for a trip from Seattle, Washington to Tofino, BC on August 3rd. I provide some context: I described that this is a family of four—a couple parents, a couple kids—and gave clear guidance on how to get there and things to do in Tofino.

So I provide all this context. And what’s nice about email as an interface for agents is that it’s [00:28:00]—simply by nature and definition—asynchronous. I’m not sitting in my inbox waiting for the agent to get back to me.

And I think for developers thinking about how they want to leverage agents, and what kinds of use cases fit into agentic technologies and techniques—these are fantastic. Ultimately, you can really think of an agent as a member of your team, right? Just like another person that has access to tools and can do things for you.

So in the same way that you would ask a colleague or someone you work with to perform a task—you wouldn't expect that task to be completed in 30 seconds or 1 minute, right? So email is a fantastic channel. Things like SMS are fantastic channels for doing this.

But in any case—I sent off this [00:29:00] email, and here is the response. It gives me a very detailed itinerary—things I didn't even ask for. It gives me a forecast, packing suggestions—a really thoughtful travel itinerary.

And once again, if we dug into the code behind this, I think you'd see some pretty thoughtful agent instructions that provide some guidelines and guardrails for what the agent should do. But what's really great here is that you don't have to stop.

Carter: Like, email is not just asynchronous—it's also alive. So here, the agent prompts me to say, "Hey, let me know if you need anything else." I reply on my own timeframe, by the way. I say, "Yeah, can you recommend some family-friendly hotels that we can stay in?"

And once again, at a reasonable timeframe later, I get a response with top family-friendly hotels in Tofino, with names, highlights, and websites for me to book with. So yeah, this is just a little example of how you can leverage email, SMS, and other asynchronous channels to deliver experiences that end users are perfectly happy with—because they’re not sitting in front of a webpage twiddling their thumbs. So I’d say it’s a really interesting technique for developers to keep in mind.

Reed: An interesting question I was gonna ask there—because I do think the technology is so far along that it really feels like, obviously, there are a lot of use cases people are already embracing as consumers—but it feels like the technology is there for just a few tweaks on UX to take it to another level of adoption.

I’m thinking about that experience you just walked through, where you were planning a trip to Tofino. I’m curious—just personally, outside of that—kind of using an agent to research parts of the itinerary, get ideas... what else did you go and do without agents or AI?

Like, I know what I would do. I’d probably go look up those restaurants on Yelp or Google Reviews and decide where I wanted to go. I’m curious what else you looked at.

Reed: And the main reason being—I’m hoping to provoke a little inspiration for folks when they’re building agents. To think through not only this really valuable leap from synchronous to async, but also: when I get that async information, what other data avenues or sources might I be going to, to fill out the full answer the agent didn’t give me on the first pass?

Carter: Yeah, for sure. I mean, that’s a great question. This is what the travel agent looks like itself. You can see we’ve broken down the experience into a bunch of subagents. So we have a city selection agent, a local expert agent, a travel concierge agent, right?

I think if you’re the developer, ultimately you do need to figure out what you think the boundaries are of what your agent can and cannot do. And look—what’s both scary and exciting about working in this space as a developer is that the capabilities of agents, and things like MCP, are growing and changing on a daily basis.

A thought you’re provoking is—sure, I want to figure out where we want to eat. And perhaps I would want to ask the end user: what sort of ratings or review services do they trust? I know that I’m kind of a Yelp person. But I know other people are more like Google Reviews people. So there are definitely situations where the agent needs to get clarification.

Carter: It would want to ask the user for some guidance, and based on the user’s response—if I say I’m a Yelp person—it would look up Yelp reviews and give me a sense of where I should be eating in Tofino based on Yelp.

Then take it a step further. I think this is what people really want. It’s what they do in real life, and it’s what they want agents to do.

Then I get to the point where I’d want it to book a reservation—for our final night in Tofino, for the four of us.

And yeah, I’d want it to take care of it completely for me. What would that require? It would require the agent to be able to act on my behalf and potentially call the restaurant, make the reservation, or interact with the restaurant’s reservation systems. And remarkably, all of these things are actually possible.

I’m gonna go ahead and stop sharing so I can get back to our conversation.

Carter: But Reed—these things are real. I’ve seen Twilio, a company I used to work at (I think I mentioned that), recently launch a really powerful API called Conversational Relay.

It effectively enables developers to build real-time, voice-powered experiences with AI. And then of course, they can plug that into Twilio’s ability to actually make real phone calls. This can create some discomfort for people—the idea of an AI-powered virtual person calling a restaurant and engaging in a conversation.

But I think ultimately, we’re just going to become more and more comfortable with these things because the outcomes are really positive for everybody. If you think about the restaurant in Tofino, they want me to be able to book a reservation with them. There's an economic incentive for them to do this.

And there's a desire for me as an end user not to spend 30 to 45 minutes on hold or on the phone, right? So yeah, I think people’s comfort levels are going to increase over time. Same with the financial aspect. I think people are going to want agents to be able to pay for things and buy things on their behalf.

And I’m sure you know this better than I do—but in both the agent space and the MCP space, there’s a really robust conversation happening right now around authentication and authorization. How is this going to work?

And I’m sure Stytch and other companies that build in this space are figuring out exactly how this will work—what developers need to do to enable these experiences. Because once again—from the end user perspective—once I’ve established a degree of trust with an agent, sure, yeah, I’ll give you access to my credit card.

Or who knows, I’ll give you access to a virtual credit card that has some hard limits on it—something a new credit card provider creates specifically for consumers that are in this AI-friendly space. So yeah, we’ll probably talk about this as we wrap.

Things are changing all the time. Developers are constantly pushing the limits of what’s possible, and I think tool builders and service providers are responding to this. We can have this conversation in a month, and there will probably be a step-function difference in what agents are capable of.

Reed: We're gonna have to bring you back when GPT-5 releases in the next week or so, because—you know—we're sitting here on August 1st recording this, and I think all of these primitives will still be extremely valuable and valid, but it'll be interesting, to your point, to see the Cambrian explosion that comes from that in itself.

And so with that, I kind of wanted to just wrap with an overall question around how you think about building AI agents and agentic experiences for consumers. Any last takeaways you'd share with the audience and with developers that are looking to either learn more or figure out how they want to navigate this new world?

Carter: Yeah, sure. Well, just—listen to my t-shirt, you know—just run MCP. No, that’s a joke. But honestly, beware of people who act like everything is fully baked and that if you just learn one thing, you’ll be fine. Things are changing super fast.

It’s a good-news-bad-news situation. Developers have never been under more pressure to understand how these technologies work and how to build with them. But there’s also never been more opportunity. Developers can completely reinvent themselves by learning how these tools work now—because we’re so early.

Nothing’s fully baked yet. I think Langflow’s a lovely tool, but it’s actively, almost chaotically, under development—and so are all similar tools. All of the frameworks and platforms in this space are iterating fast. There are constantly breaking changes.

Carter: So developers need to have a level of tolerance. These tools aren’t stable, they aren’t finished. If we just rewound three years ago, and you told me as an engineer, “Carter, I need you to stand up a SaaS application that does X, Y, and Z,” I would’ve had a lot of clarity on what I needed to do. There wouldn’t be much confusion.

That clarity doesn’t exist today. And I think it’s partly because the scope of use cases for AI is exploding and constantly changing. In the past, if you told me to build something, I could almost instantly tell you whether it was feasible or not. We all had heuristics for what was possible in a web app.

Those heuristics don’t exist in a fine-tuned way anymore. So now, developers not only need to learn how to build—they also need to learn what’s possible. And increasingly, the answer is: it’s possible.

Carter: You could throw a crazy request at me that I would’ve laughed at a few years ago, and now I’d probably say, “You know what? I think we can do that, Reed.”

So my advice—my super pragmatic advice—is: just play with these tools. You need to understand the scope and art of the possible. You also need to play with them because not every tool is right for every person, company, or use case.

Reed: Mm-hmm.

Carter: People who build products—you build products at Stytch, we build products on the Langflow team—we know that our product is tailored to a particular kind of user and excels at specific use cases. That's how product differentiation works. There won't be one tool that all developers use to build agents. That's just not going to happen—the diversity of use cases is too broad.

So you need to figure out: what’s your style of AI programming? What are you trying to do? What are your company’s goals and use cases? Then go try the tools out. And unfortunately, there's a lot to try. Right now, we’re just talking about agentic tools—but there’s similar diversity in observability tools, infrastructure tools, vendors, everything.

In Langflow, I showed you how you can wire up agents to tools. We have a built-in palette of available tools—but that palette is only a tiny slice of what’s out there. There are entire companies building tool gateways for agents. So instead of wondering, “Does Langflow have a Twilio tool?” you can use something like Arcade.dev, pull in their component, and through that gateway get access to hundreds or thousands of tools.

There are vendors building across all of these different dimensions. But ultimately, I think it’s great. Choice is good for developers.

I remember before there were solid competitors to OpenAI—what you could do with AI was largely limited by what OpenAI offered. Now you have Anthropic, and a really rich set of players in the space.

And as a developer, you benefit when companies are competing to serve you. That’s the good news. The bad news? You’ve got a lot to learn. But the good news again? These companies are working really hard to make your life easier, to make you productive, and to help you build what you want to build.

Reed: I think that’s a wonderful note to end on. You did a fair amount of foreshadowing of what we’ll be covering in future conversations, which is great.

What else is there to learn? What other tools are out there as you start thinking about not just building an AI agent—maybe with Langflow—but also using tool gateways like Arcade.dev? They'll be part of the series.

How do you think about observability? We’ll be talking with Honeycomb. Thinking about authentication and building MCP servers? We’ll talk with Stainless and someone from Stytch as well.

So I don’t think I could have set up the rest of the series better. Finally, I just want to say thank you, Carter, for joining us.

I hope folks will reach out to you or check out Langflow to explore what's possible. And I want to emphasize that closing note you shared: a lot of the learning happens through playing with these tools and continuing to do so. Even as models evolve, something that seemed impossible six months ago might be totally doable now. That's something we've seen in our own agent and AI development work—so I just want to plus-one everything you said.

Carter: For sure. And one final comment: I want people to understand that it’s not too late. Some people think, “Oh, I missed the wave, I’m too late to get into AI development.” That couldn’t be more untrue.

There's never been a better time to learn how to build with AI. And honestly, from what I've seen, the folks who are really putting in the time and figuring out how to bake this stuff—not just into their work, but into their own side projects—those are the engineers who are going to define what the next generation of software looks like.

Right now, the entire engineering industry is going through a shift. You’ll see people really learning how these tools work and branding themselves as builders of what’s next.

There’s just never been a better opportunity, if you’re an engineer, to figure out what your future looks like in this industry.

And I’m not pessimistic about engineers in the AI world—I really disavow that fear. I think agents are your [00:46:00] friends. They’re going to help you write code. They’re going to help you test code. They’re going to make you more productive—but only when, only once, only after you learn how to use them, right? So I just think there's a ton of opportunity for engineers.

I think it's super exciting. And thank you so much for having me. This was a good, positive conversation to have—and just, yeah, thanks for letting me have it with you.

Reed: No, thank you, Carter.

And I hope everyone joins us for the next sessions as well. There’ll be a lot more detail—just like we got from Carter—as well as demos.

So thank you again to Carter, and to the audience, for listening along.

