There’s a lot of smoke and mirrors in product right now—especially when it comes to AI. Shiny tools and slick prototypes are masquerading as production-ready solutions, and teams are feeling the pressure to keep up. But what happens when the hype outpaces the fundamentals?
Hannah sits down with Matt Graney, CPO of Celigo, to talk about bad product plays in disguise—from vibe coding and no-code illusions to AI-fueled shortcuts that chip away at real product rigor. With decades in B2B product and a track record scaling teams, Matt offers a sharp, grounded view on what’s actually changing, what’s staying the same, and how to keep your product sense intact through it all.
What You’ll Learn
- Why fast ≠ production-ready—and how to communicate that to stakeholders
- The hidden risks of over-relying on AI tools in product work
- What scaling a product org actually looks like (messy metaphors and all)
- Why old-school PM practices might be more relevant than ever
- Where AI really shines—and where it still falls flat
Key Takeaways
- Beware the prototype parade: Just because something looks slick doesn’t mean it’s built to last. Speed-to-demo isn’t the same as speed-to-scale.
- Use AI to expand imagination, not replace judgment: Let it surface blind spots and alternate angles, but don’t let it steer the roadmap solo.
- Ruthless prioritization is timeless: As building gets cheaper, deciding what to build matters more. Zombie features are real.
- Prototypes should be disposable: They’re meant to explore, not endure. Don’t build a mansion on cardboard.
- User research still needs human brains: AI transcripts are helpful, but nuance lives in the unsaid. Craft matters.
- Don’t graft processes without the culture: Tools like OKRs or LLMs can’t fix what’s broken underneath—they just spotlight it faster.
Chapters
- 00:00 – The problem with hype in product
- 01:19 – Meet Matt Graney, CPO at Celigo
- 02:53 – Vibe coding and Scooby-Doo moments
- 04:35 – Speed to demo vs. speed to production
- 06:34 – What if the cost to build goes to zero?
- 08:00 – When AI tools mask bad judgment
- 10:25 – Where LLMs overpromise
- 11:07 – Product shortcuts that erode trust
- 13:34 – Scaling product teams: lessons learned
- 16:28 – Old-school PM skills that still matter
- 18:30 – Where AI can really help (and where it can’t)
- 19:58 – Wrap-up + where to find Matt
Meet Our Guest

Matt Graney is the Chief Product Officer at Celigo, where he draws on over 20 years of product leadership across B2B software enterprises and startups to steer the company’s product vision, strategy and roadmap. Prior to Celigo, Matt held senior roles at Axway, Borland Software and Telelogic (now part of IBM Rational), and began his career as a software engineer in Australia after earning a B.E. in Computer Systems Engineering from the University of Adelaide.
Resources from this episode:
- Subscribe to The CPO Club newsletter
- Connect with Matt on LinkedIn
- Check out Celigo
Hannah Clark: Things aren't always as they seem. From bait-and-switch subscription pricing to the people who swear candy corn tastes good, we have no shortage of reasons to be skeptical of everything, especially right now in product. Change in this industry is moving so fast that trends can start looking like best practices and the real best practices can start looking outdated. But when the goal is to build a product that will scale over time, it's on us to keep ourselves from getting sucked into hype-train hypnosis.
My guest today is Matt Graney, CPO of Celigo. Matt has spent over 20 years in B2B product and almost nine of them scaling the product team at Celigo from four to over 45 people. And while AI has proven to be an extraordinary opportunity for the company, he's noticed a gap between expectations and reality when it comes to some of the shiny new tactics blowing up our LinkedIn feed. From prototypes masquerading as production-ready code to too-good-to-be-true tools, you'll hear his take on where to tread with caution, and the time-tested wisdom he's counting on to make it through this AI transformation alive. Let's jump in.
Oh, by the way, we hold conversations like this every week. So if this sounds interesting to you, why not subscribe? Okay, now let's jump in.
Welcome back to The Product Manager Podcast. I'm here today with Matt Graney. He's the CPO at Celigo.
Matt, thank you so much for making time in your schedule to talk to us.
Matt Graney: Thank you, Hannah. Great to talk to you today.
Hannah Clark: So can you tell us a little bit about your background and your journey to becoming the CPO at Celigo?
Matt Graney: Yeah, so I've been in B2B product for a long time now, about the last 20 years. Celigo is an integration platform, and I've been with Celigo for eight and a half years, joining just after Series A funding.
And another five and a half years before that, also in the integration space. It wasn't really by design, it's just how things worked out. Before that, I was involved in products around the software development lifecycle, including UML modeling tools, for those of you who might remember what those were. Originally I was a software engineer back in Australia, working in telecom and defense.
And I first came to the US to work as a sales engineer, actually, for a product that I'd become a power user of while I was at Motorola. So kind of a diverse background. And then I took a tour of duty in product marketing and then finally made it into product.
Hannah Clark: A tour of duty. I've never heard it described that way.
That's funny. So today we're gonna be having a little fun with a very quasi-Halloween-themed episode. We're gonna be focusing on the theme of bad product plays in disguise. And we're gonna start by digging into vibe coding. So lots of opinions are being tossed around about vibe coding in the product community.
And I'll be the first to admit, no-code platforms are amazing tools. I use 'em all the time, but they can also lead to some very Scooby-Doo-esque moments, mask-off moments where we take a closer look and it's, you know, not what it seems. So what's been your experience, Matt, with vibe coding within product teams?
Tell me the good, the bad, and the ugly.
Matt Graney: Yes. And we would've got away with it if it wasn't for you pesky kids. Right. So yeah, I think it's democratization, which is incredible. But we also have to think about the delusion. You know, with that power to make something look good, it doesn't mean it's necessarily built right.
And I think there's a lot that has to happen under the covers. We shouldn't confuse quick prototypes with production-ready code. And so, okay, citizen creators, but you also need citizen architects. And in the context of the business we're in, for example, B2B, we're dealing with infrastructure software. I mean, we're talking about billions of transactions a month.
There are only certain parts of the application, perhaps on the very front end, where we might be comfortable vibe coding anything. And so I think, you know, it's gotta be about the right tool for the job, just as it's always been. And, you know, while also making sure there's a culture of experimentation, making sure that we're encouraging the team to take risks, to try new tools and stay current with the latest developments, at what is truly a groundbreaking time for the whole industry.
Hannah Clark: So just to go a little bit deeper on that. When we think about, you know, the various levels of understanding of the technology all along the organization, it can be very tempting for folks who are less experienced with the technology, or new founders themselves, to kind of see what looks like functioning code and just kind of run with it. And that puts a lot of pressure on engineering teams to kind of match that speed or be able to develop at that level that quickly and make it look, you know, so shiny and new.
So how do we help stakeholders understand the real constraints and considerations between the prototype that we're seeing from vibe coding tools and production-ready software?
Matt Graney: Yeah, that's such a good point because, you know, speed to demo is not the same as speed to production, and especially, you know, we're talking about 5,000 customers, again with B2B workloads, so we sometimes talk about what we do, infrastructure, as the plumbing.
So, okay, AI might paint the house, it's not necessarily gonna plumb the house. Right? You wanna be sure about some of these things. There are some challenges, whether it's pressure on engineering teams or even pressure from the exec team. Recently I've seen internally, you know, fairly senior members of sort of non-technical staff vibe coding proofs of concept to show new capabilities that are much needed by their customers.
It's certainly fired the imagination, but it's also creating this unspoken pressure that maybe this is accessible to everyone, that this is something that we can rush into production. And you know, if we're talking about buildings, if I go back to painting houses or plumbing houses, I mean these are not necessarily load-bearing walls, right?
These are the facade. It might look good. And again, that's not to say there isn't a place for it, because the speed to POC is just incredible. And we need to be embracing that at every turn, whether that's helping a product manager better explain requirements or helping a designer show alternative workflows.
Whereas before, they might've been going through designing many different screens in their favorite design tool. The power of working code is undeniable, and we need to be looking at embracing that at every possible turn while recognizing that it's not the same as production ready code.
Hannah Clark: And something that's been on my mind as well is that these are tools that, invariably, are going to get much more sophisticated, and we as users will also become much more sophisticated at using them, which means, of course, we're trending towards this cost of building approaching zero.
So for yourself as a product leader, what are the concerns that you have about that trend and what have you been doing to mitigate those concerns at Celigo?
Matt Graney: Yeah, Hannah, I think that's a great mental model, a great thought experiment to run. Like what happens if the cost of building goes towards zero?
Okay, maybe it's fast approaching that, with the generations of these tools, as you say, and the ability of users, skilled users, just as we see improvements in sort of the usual ChatGPT kind of experiences. So we can expect a dramatic decline, it'll become so cheap to build. I think that actually, counterintuitively, puts even more pressure on product managers to make sure we're building the right thing. So it doesn't absolve us of all the right things we should be doing: looking at product analytics, quantitative and qualitative user research, customer interviews, product advisory councils, all those things.
Proofs of concept, A/B testing, working closely in a triad of product managers, designers, and engineering, right? So none of that goes away. And I think we have to guard against that because, you know, to go to a Halloween theme, we could end up with a bunch of zombies running around, like zombie projects, things that were so easy to build.
Maybe littering the product with all these ideas that never really quite, you know, made it. And so I think, again, some of the older disciplines of product management really come back to the fore, because we have even more choice now. I think, you know, the ability to make decisions, to drive the right kinds of outcomes, I mean, all that has to remain.
Hannah Clark: I would agree. And on the topic of maybe ill-conceived ideas, I have certainly been seeing an explosion of tools flooding the market that seem to offer, let's say, enchanting benefits while hiding, or maybe not even hiding, but just carrying some very serious risks.
Like I've seen some fairly egregious concepts circulating that I could just so clearly see an opportunity for bad actors to just manipulate in a way that's not really what we want. So what are some of the things that you're seeing in the space that give you pause, and what role does product judgment still play that just can't be automated away?
Matt Graney: I think we've always been looking for maybe the magic eight ball of product management, whether that's scoring methodologies like RICE.
I've seen them abused as well, because it still leaves a lot of latitude for product managers to have their thumb on the scale to influence the scores, and it doesn't take too many rounds of it to figure out exactly how to move the needle and tip the scale in your favor. So I think we'll see the same sorts of risks here as well.
I think there are some tools that promise maybe they're gonna vacuum up all the intel, you know, product telemetry, every conversation there ever was. I think we all know there are plenty of things that happen out of band, observations that maybe come from a session replay tool but aren't written down in a form that an LLM is gonna understand.
Right. So I think there's no substitute for sound product judgment, and while maybe we have more data than ever, I think at the end of the day it's incomplete data. And there still needs to be a vision, a strategy, and in product management there are bets. And yes, we take bets knowing that we're gonna be able to measure the outcomes, hopefully, if we do our jobs well.
But it still has to be an iterative process, and I don't think there's any magic answer that AI gives us to suddenly produce an infallible roadmap, put it that way.
Hannah Clark: Yeah, I tend to agree. I was speaking to a guest who hasn't been on the show yet, that episode is coming up soon, but something that we discussed was this concept that AI is kind of a jagged technology, in which there are some things that it's very good at. You know, we can exploit those advantages, but then there are things that it's not so good at, but it looks like it's very good at.
And so, yeah, it's a matter of the product sense, but also understanding the technology well enough to be able to kind of check yourself on, like, what are you overly relying on the LLMs to do for you?
Matt Graney: And I think even with basic chat interfaces, I think we've seen plenty of examples of some AI being quite sycophantic, right? And if you're not really awake to that, you might begin to think you always have great ideas.
And so personally, I like to spice it up a little bit and make sure that I have sort of an alternative view and ask for a hypercritical review of the ideas I have. Because otherwise I'm always sounding like a genius when I talk to my AI.
Hannah Clark: Yeah, they love us, don't they? So let's kinda move past vibe coding a little bit, and I'd like to talk about some other, let's say, attractive but ultimately unsustainable shortcuts that we're seeing product teams take right now.
What are some of the bigger offenders that you've seen around?
Matt Graney: I think one of my favorites is OKRs. I think we've got sort of a troubled relationship with them. Maybe at a company level it's not too bad, but I think in product it can be a bit challenging, and I think, you know, it's what happens when you try to import something from a FAANG, in this case, like one of the big-name companies, without necessarily having the rest of the culture to go with it. I think that's, in general, you just can't graft on a limb. Okay, now we're gonna talk Frankenstein's monster. Right? If you don't have that sort of thing as part of the culture, these things are never really going to knit together properly.
Right? So that's one. I think there's also, we've all seen metrics theater, a bunch of vanity metrics that don't really tell us much or don't drive better decisions. I think that's a gotcha that's been around for a while. Maybe we talk about feature factories or feature farms, and maybe now we have the ability to farm by the acre, right?
Because of the scale, again, where our ability to produce is going up. How do we make sure we're producing the right thing? So I think any of these sorts of things are essentially shortcuts. Again, looking for magic solutions that apparently work somewhere because someone read them on X or on LinkedIn somewhere, and without that sort of rigor behind them, it's just going to erode trust, I think.
Hannah Clark: Yeah, I would agree. On the topic of eroding trust, I think one of the ones that comes to mind for me is how it's affected the UX research community. I know that UX researchers, I think, have long suffered as being sort of the underappreciated aspect of the product process. But now with LLMs, I think we get an even more muddled view of how to conduct that correctly and kind of the role of AI in assisting with that process.
That's one I wish I could shout from the rooftops: you just cannot substitute user research with an LLM.
Matt Graney: And our head of user research was recently telling me the same thing, that the AI transcripts are great. Okay, you know, verbatim, this is what was actually said. But the insights often miss the subtleties, certainly at the moment, and maybe that'll change.
I think, as we say, there are continued generations of this technology. I think we can be optimistic about the future, but for now I think it's most important to understand the limitations and guard against them through the old-fashioned craft of good user research.
Hannah Clark: So switching gears a little bit, let's talk a little bit about your experience scaling teams. So you have been doing it for a while. You've scaled your team at Celigo from two PMs to 10 times that size. You've got lots of experience in building processes. I'm sure that there have been missteps along the way.
So can you tell us a little bit about what has been sort of your, or a few of your best takeaways in terms of scaling teams from small organizations throughout their maturity? What have you kind of learned that you think still holds true even today in this fast moving time of AI?
Matt Graney: Yeah, so it has been a journey, as you say, Hannah.
So I joined the company, inherited two PMs and two tech writers. So team of four. Now we're, you know, about 45, close to 50, right? So it's been a lot and it's a team of PMs, designers, researchers, technical docs, and product operations. I think what I've learned is maybe the order in which to do things right.
So at the very beginning, life was simple, you know, a PM by my side on more of the platform side, working directly with our CTO, and three of us around a room prioritizing an entire backlog, right? I mean, it was as simple as that. But that clearly doesn't scale in the long run. You know, dealing with offshore teams, both PMs and engineering being offshore, complicates things.
And we gradually added process. At the beginning, design was just the best we could do with the tools we had. So it was PMs doing their best, you know, I always think it was almost like stitching together screenshots, like a ransom note, that's kind of so grungy and almost embarrassing. And then really, I think our first foray into, you know, professional designers, we didn't do that well.
I felt like we used design more like an agency model, hey, make this look pretty, instead of really thinking about it as user experience as opposed to just design. Docs, we've always been fairly good at, and that has continued to evolve. So we've added process as we needed it. I'm not saying we're perfect, but it's tended to work for us pretty well, and maybe we haven't got all the right ceremonies in place at times, it feels like.
But I think sort of directionally it's been correct and it's been a case of just enough process and responding to the needs of the business. And providing room to grow for all members of the team and so on. But it's been a journey. I've never run a team this large before. Right. There are a lot of firsts here.
And I have some battle scars and gray hair to prove it.
Hannah Clark: Oh, really? Where?
Matt Graney: Oh yeah. I'll blame my kids maybe. Okay.
Hannah Clark: Yeah. You can blame everything on your children. I do it all the time. So to expand on that a little bit and kind of tie it in with what we were talking about before, I'm curious about some of the tried-and-true product management practices that you think maybe are actually more important now than ever, even if they seem a little old school.
Are there any concepts or frameworks or practices that you found yourself returning to more than ever or emphasizing with your teams in the age of AI?
Matt Graney: As I said, you know, as the cost to build maybe approaches zero, I think it actually puts more onus on us to prioritize. So I'm thinking more in terms of the tools for prioritization.
So some of that has to be alignment around a vision and strategy, making sure that everyone on the team is clear enough to be able to make localized decisions. There's no substitute for firsthand contact with users, with customers, including, you know, I feel like I spent the early days of my career doing sort of hostage negotiation, like unhappy customers, talking them off ledges or whatever, right.
I think there is no substitute for that because it really helps inform the full picture of what the product is about and gives the PM tools they need to better understand how they ought to be prioritizing. So I mean, there's no substitute for ruthless prioritization. At some level, you're gonna run outta capacity.
I sometimes look with envy at much larger companies, but I know somewhere they have exactly the same sorts of problems. I know all PMs do, you never have enough capacity. So it just comes down to prioritization, and all the usual tools apply. I think, as I said, where AI comes into play, okay, maybe it's to help understand and digest a whole lot of information, indispensable when it comes to research, and then, as we talked about with vibe coding, when put in its place, you know, for rapid prototyping.
I think what some people forget about prototyping, too, is that the original idea of prototypes is that they're meant to be thrown away. They're not meant to be the basis for what goes into production. And if you can do all those things, I think then, you know, the tools are there to really accelerate the way we work and, again, to assist with prioritization.
Hannah Clark: Yeah, well said. Well, to close, on an optimistic note, where would you say are the most legitimate high value opportunities for AI to enhance the work of product managers? And what advice would you give to product leaders who really want to embrace AI in a way that's thoughtful and without falling into any of these traps or potential mask off moments we've talked about?
Matt Graney: Yeah, I think really one of the big ones has gotta be around research. I think the ability to do competitive research for a PM these days, it's a tool I wish I had in the past. Sure. I think part of it, of course, is vendors tend to be a lot more public, like most docs for products are available now, but you really need to be doing that.
That is a huge one. I think, obviously, quickly putting together documentation. A lot of people talk about writing press releases first, or FAQs first. I think this is a great opportunity. It might've seemed laborious to do that before, but it's a great way to get started today. I think it's about using tools like this to help expand the imagination.
What am I not thinking about? Right? I think AI, when prompted in the right way, can be really good at identifying some blind spots. Yeah, it might hallucinate sometimes, but even outta that, sometimes there can be insights or lateral thinking perhaps that hadn't come to mind. You know, I think AI though, can be a bit of a fun house mirror, right?
If it's messed up to begin with. If you don't have your discipline in place, then it's only gonna make it worse. But when done right, it can provide that focus that product managers need.
Hannah Clark: Yeah. I tend to agree. Well, Matt, this has been wonderful. Thank you for sharing all of your knowledge and for, you know, sense checking some of these things that we're seeing so much of in the space.
I really appreciate it. Where can folks follow your work online?
Matt Graney: Best to find me just on LinkedIn. I seem to be on there more than anywhere else, so look forward to catching up with people there.
Hannah Clark: Awesome, thank you so much.
Next on The Product Manager Podcast. If you thought this episode went hard on expectations versus reality, we are about to deep dive into LLM technology in an episode that will challenge everything you think you know about AI.
While the potential of the tech is limitless, the current limitations are far more complex than we realize, and so are the impacts on us as both builders and users of AI products. This one is going to hit hard, so subscribe now to jump in with us next time!
