
Lambda and Cologix debut NVIDIA HGX B200 AI clusters in Columbus, Ohio

AI infrastructure is no longer confined to the coasts. With Lambda's one-click clusters, Supermicro's energy-efficient systems, and Cologix's carrier-dense facilities, Columbus, Ohio has become home to the Midwest's first NVIDIA HGX B200-powered AI cluster. In this Data Center Frontier podcast, Lambda's Ken Patchett, VP of Data Center Infrastructure, and Cologix's Bill Bentley, VP of Hyperscale and Cloud Sales, discuss what this deployment means for the future of regional AI infrastructure, why the "aggregated edge" is becoming essential, and how flexible, scalable GPU access is reshaping enterprise AI adoption across industries from healthcare to manufacturing.

Audio Transcript

0:06: Hello and welcome back to the Data Center Frontier show where we explore the evolving digital infrastructure powering tomorrow's world.

0:13: I'm Matt Vincent, Editor in Chief with Data Center Frontier, and today we're diving into a story that speaks directly to the rapid decentralization of AI infrastructure.

0:26: And what it means when powerhouse technologies like NVIDIA's HGX B200 platform, usually the domain of coastal hyperscale markets, start showing up in the heart of the Midwest.

0:37: What do I mean by that?

0:39: Well, Cologix, Lambda, and Supermicro have teamed up to launch Columbus, Ohio's first NVIDIA HGX B200-based AI cluster deployment.

0:50: The project combines Lambda's one-click clusters with Supermicro's energy-efficient hardware and Cologix's dense, carrier-rich ScalelogixSM facility, COL4.

1:03: There in Columbus. So, welcome in, Bill Bentley, VP of Hyperscale and Cloud Sales with Cologix, and Ken Patchett, VP of Data Center Infrastructure with Lambda.

1:18: Thanks a lot for joining us here on the podcast.

1:21: Yeah, thanks for having us on, much appreciated.

1:24: Yeah, thanks.

1:25: Super glad to be here.

1:27: Thanks, Ken.

1:29: Thank you, Bill.

1:30: Well, I've got 10 questions here, so I'd better just dive right into them.

1:36: So, the project that I described in the preamble marks the first deployment of NVIDIA HGX B200 clusters in the Columbus, Ohio region.

1:46: Just sort of big picture, I'll ask both of you.

1:49: I mean, I think many in our audience know, but why Columbus and why now? If you can talk about the significance of the region. OK, who wants to jump in?

2:02: Yeah, yeah, I'll take that first from the Lambda perspective.

2:05: I think it's important to understand that the shift to superintelligence is happening now.

2:12: Systems that can reason, adapt, and accelerate human progress, right?

2:16: This requires an entirely new type of infrastructure.

2:21: It means capital, it means vision, and it means the right partners.

2:25: Columbus with Cologix made sense for us because, beyond being centrally located, they're highly connected, they're cost-efficient, they're built to scale. And it's important to understand we're not chasing trends here.

2:37: We're laying groundwork.

2:38: For a future where intelligence infrastructure is as ubiquitous as electricity. With partners like Cologix, we're bringing NVIDIA HGX B200 clusters to the Midwest for the first time.

2:49: It's a major milestone for the region and, frankly, a major milestone for shepherding in the superintelligence age.

2:57: Thanks, Ken.

2:58: Bill, did you want to jump in there with some insights?

3:01: Yeah, so from a facility perspective, Columbus is uniquely situated: it's a rich intersection of long-haul fiber, and there are tax incentives available from the state.

3:11: There are also low-cost utilities, and all of that has catapulted Columbus to one of the top US markets.

3:16: Add to that the increased density of hyperscalers that have deployed there, plus local enterprises and manufacturing, and the Columbus ecosystem is ripe for growth.

3:26: So it's just a natural geography and location for AI workloads that are seeking geographic diversity without sacrificing performance.

3:38: Yeah, I wanted to ask you, in terms of this project: does it represent sort of a signal that major AI workloads are starting to shift away from the traditional coastal hyperscale markets?

3:52: I mean, does a deployment like this change the economic geography of AI infrastructure in North America to any degree, or how would you characterize it?

4:05: Yeah.

4:06: Well, you know, I love this question, because I like to think of Lambda as an AI hyperscaler.

4:15: So, you know, the line between AI workloads and traditional hyperscale is blurred.

4:21: AI workloads are now an integral part of business for traditional hyperscale providers.

4:26: They partner with AI hyperscalers like Lambda and their customers, and they jointly support end customers, so it's a new ecosystem for AI.

4:36: Yeah, I like to support that, because I wouldn't call it a shift either.

4:40: It's more an addition. So, what does that mean?

4:44: So Columbus, it's not a tier two market anymore.

4:48: The biggest hyperscalers in the world have been building there for years.

4:51: So we're not abandoning one coast for the other.

4:53: It's really that the workloads are commingling across all these regions now, right?

5:01: So Lambda, we're an emerging hyperscaler ourselves.

5:04: We're multiplying our megawatt footprint by four between 2025 and 2026, and it's only getting faster.

5:12: So we're in Texas, Ohio, California, Illinois.

5:15: We're democratizing AI infrastructure, and we've got to deliver it where it's needed, and where it's needed is where we will bring it.

5:23: Thanks, Ken.

5:25: Definitely important distinctions, you know, to keep in mind, not to oversimplify.

5:32: So the next question is: Cologix describes this Columbus project as a hyperscale edge deployment, I believe.

5:41: What does a term like that mean in practice, you know, and how is it different from traditional definitions of edge and hyperscale?

5:51: Yeah, to me, you've got it.

5:55: You get the best of both worlds.

5:57: It's scalability, it's network density, it's accessibility to CSPs, service providers, and enterprises for interconnection.

6:05: The Columbus campus is the interconnection hub in the Columbus market.

6:09: It offers a unique ecosystem that directly interconnects our enterprise customers with service providers, whether that's a carrier, a cloud service provider, or AI-specific workloads like Lambda's.

6:20: And then it's unique in that it sits on our interconnection campus, but it's a wholesale facility, so we can support megawatt-scale workloads like Lambda's and, you know, their customers', while still offering our bread-and-butter interconnection platform. So it's both hyperscale and edge.

6:42: Yeah, I'll tell you, I tend to call it the aggregated edge.

6:49: When you're really thinking about this, and I've been working on this for a while, this aggregated edge is where all the workloads happen: they come to a place, and 80% of the work needs to happen there.

6:59: Cologix is great in their location for that.

7:02: So this aggregated edge takes most real-world AI workloads.

7:06: There's inference, we have fine-tuning, smaller training tasks that occur.

7:10: They don't need the same setup as frontier model training, right?

7:14: So about 80% of our workload happens at that enterprise layer.

7:18: So this kind of deployment really gives us the space that we need to handle that work efficiently, close to the users and with real-time responsiveness, but still connected to these large language models.

7:32: So that aggregated edge is gonna be a really important move as we go forward, and I think that shift from what we used to call tier 1 and tier 2, it just doesn't exist anymore.

7:42: It's an aggregated edge.

7:46: Yeah, really important to keep in mind, and I love that term, by the way. So, Ken, this next question will go to you, and then, Bill, you can add on. So, Lambda's one-click clusters:

8:02: we know they promise rapid provisioning and minimal complexity for end users.

8:08: How does this model impact the traditional procurement, integration, and deployment timelines for enterprise AI?

8:18: I think it's very important.

8:21: Time to market is the most important thing for anybody in this world, and also, from the enterprise side, there's the ability to leverage and action these really expensive server racks and these LLMs that we have out there.

8:35: From an inference standpoint, it's important to understand that the one-click cluster reduces time to value for the enterprise.

8:44: It used to take weeks or months: lining up GPUs, getting data center space, provisioning software, configuring the systems.

8:52: We can do this nowadays.

8:53: Lambda has deployed its GPU AI compute platform infrastructure in places like Cologix's, to where a customer can now call up and dial up capacity:

9:04: I need 2, 4, 8, 16, 32, 64, and just keep doing the multiples, and they can get that in their hands in days now as opposed to months.

9:13: That's one of the most important things that we can really do.

9:17: And, you know, none of the enterprises that we talk to think they're immune from disruption from, let's say, competitors becoming faster or leaning in and embracing AI.

9:27: They know that they need to move from their proof of concept to their production in record time.

9:32: So at Lambda, we've proven with one-click clusters that we can deploy thousands of GPUs very quickly.

9:38: And the new large language model deployments were under 90 days, and that's the speed that AI development requires today.

9:44: One-click clusters are days; new clusters and new data centers, under 90 days; and we're doing really, really well at Lambda to make sure that our customers get their time to market as quickly as they can.
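To make Ken's "dial up the multiples" point concrete, here's a minimal sketch of what self-service GPU provisioning could look like in code. The endpoint path, payload fields, region, and instance-type name are illustrative assumptions for this post, not details confirmed in the episode or in Lambda's documentation.

```python
# Hypothetical sketch of self-service GPU provisioning in the spirit of
# Lambda's one-click clusters. The endpoint, fields, and names below are
# assumed for illustration, not a documented contract.
import os
import requests

API_BASE = "https://cloud.lambdalabs.com/api/v1"  # assumed base URL
API_KEY = os.environ["LAMBDA_API_KEY"]            # assumed auth scheme


def launch_gpus(quantity: int) -> dict:
    """Request `quantity` GPU instances; scaling 2 -> 4 -> 8 -> 16 is
    just a change to one field, which is the point of the model."""
    resp = requests.post(
        f"{API_BASE}/instance-operations/launch",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "region_name": "us-midwest-1",        # hypothetical region
            "instance_type_name": "gpu_8x_b200",  # hypothetical SKU
            "quantity": quantity,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # "Keep doing the multiples": the request shape never changes.
    print(launch_gpus(16))
```

The contrast with traditional procurement is that the whole "weeks or months" pipeline of lining up GPUs, space, and software collapses into one authenticated request.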

9:57: Absolutely, Bill, anything to add there?

10:00: Definitely, because from a facility standpoint, Lambda's one-click clusters change everything.

10:06: We now have a really unique category of customer, like Lambda, that requires incredible scalability from a user perspective.

10:13: So if Lambda has a large one-click customer, then Cologix needs to be prepared to scale from zero to megawatts within seconds, and that is a really new dynamic in the industry.

10:24: And it raises the stakes for operational excellence from the facility management perspective too, so it's definitely something new that has changed the course of history from a data center perspective over the last few years.

10:38: But I gotta say, Bill, that's spot on. This is such critical infrastructure.

10:47: It's some of the newest, latest, and greatest enhancements to human technology in the history of the world.

10:53: And you can't run this in a place that doesn't have the capability to respond and work at this kind of speed or with this kind of completeness.

11:06: It's like, you don't put GPUs in a Quonset hut in the middle of South America

11:11: in the rain and go, oh yeah, that's gonna be good.

11:14: It's not.

11:15: We have to find the right partners that are building the right data center with the right type of infrastructure that can respond and work with us.

11:21: We're learning where the boundaries on the envelopes are.

11:24: We're uncovering new problems between facility and electrical usage all the time.

11:30: So, I think it's really important to understand that when we pick the places that we go as Lambda,

11:37: places like Cologix, they have a strong facility and good engineering background, and they can actually help

11:45: solve the problems that are being uncovered by today's scale.

11:48: And at the aggregated edge, that problem is gonna be even more manifest, because most buildings historically weren't built to handle the shifting electrical workloads that you see with AI.

12:00: Yeah, it gives a whole new meaning to the customer ask for flexibility in the data center.

12:05: No question about it.

12:07: Remember, the answer is always yes.

12:09: Well, Bill, you kind of took the words right out of my mouth there.

12:16: The next question, again, I guess we'll begin with Ken, but it's about flexible consumption models like Lambda's GPU Flex Commitment. From a business model perspective, how does an offering like this change the financial equation for AI adoption?

12:34: Well, as I alluded to earlier, it lowers the barrier to entry, and it opens the door to people who otherwise wouldn't have been able to participate in it.

12:44: Lambda believes very strongly that the right to access the technology is a fundamental human right, and we really love that we're actually able to lower that bar and bring in the people who otherwise wouldn't have been able to do that.

12:59: So access to this kind of compute power is most important.

13:02: It's been historically reserved for companies with super deep pockets.

13:06: Lambda's GPU Flex Commitment changes that.

13:09: So now our startups, our researchers, smaller teams, they can tap into this infrastructure without a massive upfront investment.

13:17: And as an AI pure play, we unlock TCO and performance, and that's the most important thing that we can offer.

13:23: It's time and money.

13:24: So it's not just making it easier for the smaller players to enter; it's also enabling the enterprises to access hundreds of top-end GPUs for their proofs of concept.

13:34: And then they scale that to thousands when they're ready.

13:37: So you don't have to make these gigantic upfront commitments.

13:40: You can try it out.

13:41: You can do your proofs of concept, you can fine-tune what you're doing, and then you can get big.

13:46: So Lambda really believes, and we've seen this historically from the early '90s all the way through our technological history:

13:53: you start, you grow, you find those partners you work with, and then you explode.

13:57: And we really believe there are tens of thousands of companies out there who otherwise wouldn't have had access to this that will today, and the products that are going to be created are going to be amazing, and we're the foundation for that.

14:09: Thanks, Ken.

14:10: Bill, any note to add there on the flexible consumption model for GPUs?

14:15: You know, I would echo a lot of what Ken just said.

14:17: You know, from our side, it enables smaller innovators to adopt AI technology, and then for the larger customers, it enables them to scale faster. And for us at Cologix, we support customers that are on both sides of the spectrum, large and small.

14:37: So, you know, I think it's unique in that it enables that whole spectrum of customers to access those types of services.

14:45: Thanks, Bill.

14:46: The next question is coming back to you, Bill, but then, Ken, also please feel free to add on. The question is: how does the integration of Supermicro's energy-efficient systems into this deployment impact power density, cooling design, and sustainability metrics at COL4?

15:09: Well, said briefly, there is a dramatic impact from the use of the systems that Supermicro has provided Lambda, and we're working hand in hand to change the landscape of data center design with systems like Supermicro's and Lambda's implementation.

15:26: If you rewind to the not-too-distant past, the standard design density was around 5 kilowatts a rack, maybe less, and the hardware designed for these GPU servers is now changing every six months.

15:38: So Lambda's requirements in supporting Supermicro's designs are causing us to push the boundaries of power density and cooling design; Lambda is pushing the envelope and Cologix is following.

15:48: So we're adaptable.

15:49: For this deployment, we're now supporting 44-kilowatt air-cooled racks, but we need to plan for the next phase of design in parallel.

15:56: So we're currently constructing air-to-liquid hybrid cooling capability, and then beyond that, we're looking at implementing dedicated liquid cooling designed to support the three-year projections from chip manufacturers and the likes of Supermicro.

16:15: So it's a complete industry transformation.

16:20: It's not just innovation, so it's an exciting time.
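A quick back-of-envelope on the density numbers Bill cites: the 5 kW and 44 kW figures come from the conversation, while the per-node draw of an 8-GPU HGX B200 server is an assumed round number for illustration, not a vendor spec.

```python
# Back-of-envelope rack-density math for the figures in this episode.
LEGACY_RACK_KW = 5.0    # "standard design density was around 5 kilowatts a rack"
CURRENT_RACK_KW = 44.0  # air-cooled racks in this deployment
NODE_KW = 14.0          # assumed draw of one 8-GPU HGX B200 node (illustrative)

nodes_per_rack = int(CURRENT_RACK_KW // NODE_KW)  # -> 3 nodes, i.e. 24 GPUs
print(f"Density growth vs. legacy racks: {CURRENT_RACK_KW / LEGACY_RACK_KW:.1f}x")
print(f"8-GPU nodes per 44 kW air-cooled rack: {nodes_per_rack}")
# A fourth node (~56 kW) would overrun the air-cooled budget, which is
# why the next design phases are hybrid air-to-liquid and then dedicated
# liquid cooling.
```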

16:26: Thanks, Bill. Yeah, go ahead, Ken.

16:29: Well, I love that, because innovation is just squeezing the same water from the same old sponge, right?

16:35: We're working in a transformative environment right now, and it's people like Lambda and Cologix and other partners that we have who are working together to transform this industry.

16:47: I lived in a world in the early '90s when we were at 2 to 4 kilowatts per rack, and we used the same data centers and changed them a little bit.

16:56: We add a little more air, a little more water.

16:58: We just innovated in the same data center spaces all the way to 44 kilowatts per rack. You can't do that anymore.

17:04: Now a completely new transformation needs to happen.

17:07: So what I would like to say here is: this integration with Supermicro,

17:10: it's not just Supermicro.

17:12: We have lots of partners that we work with, both on the server side as well as in the colocation and hyperscale space.

17:22: A rising tide floats all boats.

17:23: We all have to work together because this truly does impact the sustainability footprint that we have in the world.

17:30: So Super Micro has been a part of Lambda's journey since the early days.

17:34: Cologix has been a part of our journey.

17:36: Hardware evolves, time passes.

17:37: We need partners that can evolve with us.

17:39: But what we really want to do is understand that the sustainability

17:43: piece here should be fundamental.

17:46: We should have unconscious competence in supporting our sustainability footprint.

17:52: So the people we work with, we want them to take that seriously.

17:55: We take that seriously.

17:57: And together, we're rethinking how our data centers are designed.

18:00: So we're not just tweaking the old model, right?

18:02: We're building for what's coming next,

18:04: and when we do that, we are trying to build in our sustainability footprint with unconscious competence.

18:13: We're doing it, and it just is that way.

18:17: And I'll add that there's definitely some extended value for Cologix with the implementation of these Supermicro systems too.

18:26: They support alignment with our ESG KPIs; just for example, continually reducing PUE, our Scope 1 and 2 emissions, and our water usage effectiveness across our footprint.

18:38: So these systems that are in use by Lambda help Cologix continue achieving those goals year over year.

18:44: It's a circle, and it takes all of us working together on it.

18:50: Absolutely.

18:51: The next question, maybe we'll start with Bill, and then, Ken, you can build on Bill's answer. But I wanted to ask about Cologix's interconnection density in the Ohio region: what role does that play in the performance of AI workloads at COL4?

19:14: Yeah, so Columbus is

19:17: a unique, centralized geography.

19:19: It provides access to the intersection of long-haul networks in the Midwest, and what that means is it's one of the best geographies for low-latency access to the greater population density across the US.

19:33: And because of the interconnection capability on our campus, it also gives our customers the capability to interconnect directly with Lambda's AI workloads or pursue hybrid strategies with compute from traditional CSPs.

19:45: So, you know, I think it just means that the network matters as much as the compute for latency-sensitive AI firms.
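For a rough sense of what Bill's "low-latency access" claim means in milliseconds, here is a simple propagation estimate. The ~5 microseconds-per-kilometer figure is the standard speed of light in fiber; the city distances are approximate straight-line numbers we've assumed for illustration, not actual route miles.

```python
# Rough fiber-latency illustration for the "low-latency access" point.
US_PER_KM = 5  # one-way propagation in fiber, ~5 microseconds per km

# Approximate straight-line distances from Columbus (assumed, not route miles).
cities_km = {"Chicago": 450, "New York": 765, "Atlanta": 710, "Dallas": 1500}

for city, km in cities_km.items():
    rtt_ms = 2 * km * US_PER_KM / 1000  # round trip, milliseconds
    print(f"Columbus <-> {city}: ~{rtt_ms:.1f} ms round trip (propagation only)")
# Real routes add switching delay and indirect paths, but most of the
# eastern US sits within a ~10-15 ms round trip of Columbus, which is
# what makes it attractive for latency-sensitive inference.
```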

19:57: You know, what's interesting is, in the industry, we've all called those NFL cities, right?

20:04: So either we're gonna change this term, or there's an NFL team looking to expand to Columbus right now.

20:09: I'm not sure which it is, but maybe we can make that play right now.

20:14: You know, it's really interesting, this interconnection piece, because it depends on the workload.

20:19: Sometimes, you know, everybody asks you, like, well, what are your latency requirements, let's say for inference or LLMs?

20:24: And the answer is always: it depends. It depends on what is being done, what type of work, what kind of enterprise work is being done.

20:33: So, whether you're doing inference or you're doing full model training, your network is gonna have to shift.

20:38: When you have dense, reliable interconnection like we have here, it means that we can serve a wider range of workloads without compromising, and without having to move part of our business to one area of the state or somewhere else and keep the other part here.

20:51: We can serve both inference and large language models from an area that's highly connected.

20:57: So, you know, Bill, maybe we'll get to name that new NFL city, or at least the team. That would be great.

21:04: Yeah.

21:09: So, thanks for that.

21:10: So, the next question takes us to customer and end-user type issues.

21:23: In your shared view, how will this Columbus deployment influence sectors such as healthcare, logistics, and manufacturing in the Midwest?

21:37: You want me to take that?

21:38: Sure, you go for it, Ken.

21:40: OK.

21:41: Well, AI is already integrated into all of these industries, whether people realize it or not.

21:48: Factory robotics, routing software, diagnostics of all types, right?

21:54: So, the difference that we're seeing now is scale and accessibility.

21:58: It's growing

21:59: exponentially.

22:02: So we're entering this era right now where regional AI infrastructure supports real-world use cases across multiple sectors.

22:10: So that's why I call this the aggregated edge.

22:13: It's bringing intelligence closer to the point of need.

22:16: Our one-click cluster with Cologix is an aggregated edge

22:20: location model, right?

22:21: It delivers AI where it's needed.

22:24: It's at or near the edge.

22:26: It's near the data.

22:27: It's where the work is being performed and where the data is being created.

22:32: And we have a secure, carrier-rich environment with Cologix there in Columbus, so it's a win.

22:38: It's a win for Columbus, it's a win for the Midwest.

22:41: Yeah, Ken nailed it there.

22:43: Columbus has tremendous industry density with healthcare, life sciences, finance, insurance, manufacturing, a number of others, but those are some of the major ones.

22:53: And rather than having to rely on transit from traditional tier-one data center markets like Ashburn, Phoenix, or Silicon Valley, there is now ultra-low-latency access to AI workloads and Lambda's one-click clusters.

23:05: And it just means that these models can be trained and deployed closer to where they're used.

23:13: Thanks, Bill.

23:15: The next question is sort of about the nature of the partnership between your companies.

23:20: Is the collaboration here, like, you know, a one-off showcase, or is it really the beginning of more of a repeatable playbook for regional AI cluster development across the Cologix footprint?

23:34: .

23:36: Well, it really depends on Bill's pricing model we, you know, I'm kidding you.

23:38: It's very much a playbook.

23:45: Again, there's a lot of room for a lot of partners in the work that we're doing today.

23:51: Cologix is a very, very strong partner with Lambda.

23:55: We're working with them.

23:55: They understand where the market's headed.

23:57: They're trying to build to where those areas are so that Lambda has a place to land very quickly.

24:03: And so it's not just looking at where the market's been; they're looking at where the market is going, and that works for Lambda.

24:09: So that foresight, that's critical, right?

24:12: As we expand our new markets, we need somebody to get there in front of us because we build an AI GPU compute platform.

24:20: And Cologix builds a place where that can be managed, run, and operated from, and we need them to be in the front of that.

24:27: So, again, we're lowering the barrier for entry to AI.

24:30: We're delivering this kind of self-service, production-ready AI compute model that used to be available only on the East or West Coast, right?

24:37: So we're solving critical pain points for enterprises that are starting to access the inference game and the enterprise-level software that they otherwise couldn't.

24:46: So I would say that, in my opinion, Cologix has the commitment, they have the technical capability, and they have the desire to do this with Lambda. And yeah, our playbook is to continue to grow.

24:58: Yeah, and as Ken implied earlier, I'm in sales, so I would say it's definitely repeatable.

25:04: But from our experience with Lambda, we understand their data center stamps to support AI workloads and how to easily replicate them.

25:13: But the reality is that things are changing really fast.

25:15: The chip requirements are changing, the densities are increasing, the cooling technologies are evolving quickly, and Cologix wants to be in front of these technologies so that Lambda doesn't need to build their own data centers.

25:26: So we understand our partner strategies and we are thinking forward on ours as a result.

25:31: Got it.

25:33: Well, we're getting to the end of our time, but the last question is kind of a big one.

25:38: So I'll send it to you, Ken, first, and then, Bill, you can add on.

25:46: But the question is: does the Lambda model mitigate the hardware obsolescence curve?

25:51: And what I mean by that is: given the pace of change in AI hardware, especially with NVIDIA's generational cycles,

26:00: how do you really future-proof an investment like this while still delivering on value for today?

26:07: Yeah, that's a great question.

26:09: You know, the changing DNA of the hardware is every six months now.

26:14: And yet it takes you five years, you know, to plan, permit, and then construct and operate a data center.

26:23: A lot of people will tell you it's less than that.

26:25: But with all that prework on land, land use, and entitlement, it's about five years.

26:30: So that's a big investment upfront, and at the same time, the GPUs are very expensive.

26:36: So we have to build these data centers that can meet the needs of a GPU that's going to come three or four or five

26:45: or six iterations before the data center actually gets built.
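Ken's timeline mismatch is easy to check with arithmetic. The six-month hardware cadence and the roughly five-year build window come from his comments; the shorter build times are the "less than that" he mentions, and the calculation itself is ours.

```python
# Quick sanity check on the build-time vs. hardware-cadence mismatch.
HW_CADENCE_MONTHS = 6  # "the changing DNA of the hardware is every six months"

for build_years in (2, 3, 4, 5):
    generations = build_years * 12 // HW_CADENCE_MONTHS
    print(f"A {build_years}-year build spans ~{generations} GPU generations")
# Even an aggressive two-year build spans ~4 hardware generations, which
# is why the facility has to be designed around elements that can change
# rather than as a monolith.
```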

26:49: So that model doesn't work.

26:51: The old-school way that data centers were built doesn't work anymore.

26:55: We have to think in terms of elements, or things that will change within the data center.

27:00: So today's infrastructure, it has to be adaptable.

27:03: We have to change certain elements.

27:06: We have to change things like the air volume, the air pressure, the air speed, the air temperature, but maybe we don't need to change everything else.

27:12: So the really important thing to understand is that when you build a data center that costs tens of millions per megawatt to build, depending on where you are,

27:24: you don't want to strand that capital outlay.

27:26: And so you have to be able to quickly adapt to the changing infrastructure needs that are brought on by different hardware that's coming online.

27:35: So we have to design for constant change.

27:37: Everything needs to be able to be adjusted and adapted.

27:41: And then, more importantly, when you think about the GPU chips and what value they bring to the game: there are some business users that require the latest and greatest, and they need it right now.

27:53: They need it as fast as you possibly can.

27:55: You need to be able to support that.

27:57: There are others that say, hey, we've already managed the unit economics of, let's say, 1,000 H100s.

28:06: So I want to work on my time and my money related to inference workloads.

28:14: So as newer needs go for higher-density, higher-speed GPUs, there are still lots of other workloads that take the already-deployed, ready-to-go H100s, and we're seeing users use those for years now.

28:32: So it's not just a year or three years and you throw it away.

28:36: The work requirements are myriad, and it goes from the oldest chips all the way to the newest, and everybody has different needs, and it's our job to manage that.

28:48: We have to manage the unit economics underneath it.

28:50: We have to manage the data center utilization and the GPUs,

28:52: and we have to do that with partners who can, again, adapt the data center to the changing needs of the hardware.

28:59: So, long convoluted answer, but the fact is you can't build monolithic anymore.

29:08: You have to build to be adaptable, you have to be able to change, and you have to be able to pivot quickly.

29:13: And that's a new dynamic for an industry that used to say, this is good technology as long as it's 20 years old, right?

29:20: That doesn't work anymore, yeah.
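Ken's unit-economics point lends itself to a quick worked example. The dollar figure, useful life, and utilization rate below are assumed placeholders for illustration, not numbers from Lambda or the episode.

```python
# Illustrative unit-economics sketch for the point about older GPUs
# staying useful for inference. All figures are assumed placeholders.
def amortized_cost_per_gpu_hour(capex_per_gpu: float,
                                useful_life_years: float,
                                utilization: float) -> float:
    """Spread the purchase cost over the hours a GPU is actually busy."""
    busy_hours = useful_life_years * 365 * 24 * utilization
    return capex_per_gpu / busy_hours


# Assumed numbers: an already-deployed H100 at 70% utilization.
h100_3yr = amortized_cost_per_gpu_hour(25_000, 3, 0.7)
h100_5yr = amortized_cost_per_gpu_hour(25_000, 5, 0.7)
print(f"H100 amortized over 3 years: ${h100_3yr:.2f}/GPU-hour")
print(f"H100 amortized over 5 years: ${h100_5yr:.2f}/GPU-hour")
# Stretching useful life from three to five years cuts the amortized
# rate by 40%, which is why inference workloads keep absorbing fleets
# that frontier training has already moved past.
```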

29:22: Yeah, absolutely.

29:25: And Ken's spot on; going back to one of his earlier comments, it's a tough problem to solve.

29:30: And whatever mechanical and electrical topology you choose in a facility, it requires tons of capital, and significant design changes are a real challenge.

29:40: Based on guidance from our customers like Lambda, we're planning for what they need now, with the flexibility for what they might need tomorrow, and then we have to plan ahead for known future requirements like liquid cooling too.

29:49: Just as an example, we support high-density air-cooled racks up to 44 kilowatts right now, and our facilities under development in Columbus give us the flexibility for both air and liquid cooling in the future.

30:01: And then looking beyond that, for our next facility we're planning significant tranches of dedicated liquid cooling capacity.

30:11: There are still air-cooled requirements.

30:12: We plan to support those going forward too, but, you know, we need to be able to adopt all of these new technologies and plan years ahead, to Ken's point.

30:21: Totally.

30:22: Well, I wanna thank both of you again for coming on the podcast today.

30:27: Really been a great discussion.

30:29: I've learned a lot.

30:30: I'll give both of you an opportunity here for any closing points or takeaways for our audience related to the COL4 story or, you know, the larger AI build-out. I'd just love to hear your final takeaways here.

30:45: Bill.

30:45: Please feel free.

30:47: Sure.

30:47: And first, just want to say thank you again.

30:49: I really appreciate the opportunity to join here and share a little bit more about Cologix in Columbus specifically, and, you know, supporting these AI workloads is a model that we're replicating across our portfolio.

31:02: So it's not exclusive to Columbus; this is a fantastic hub,

31:05: one that we're focused on and have made major financial commitments to for years of growth. But we're also developing those same capabilities in other major markets where Cologix has a presence, to help our customers like Lambda grow and be able to deploy those one-click clusters.

31:22: And I would tell you that

31:26: the industrial revolution was... or excuse me, the information age was 75 years long, and here we are shepherding in the age of superintelligence right now, right?

31:36: The promise of the technology to fundamentally shape and form the human experience, it matters, and it needs the platform to do it on.

31:44: We need the infrastructure, we need to deploy GPUs, and we have to lower the barrier to entry for the people that have the ideas and the dreams and the hopes that they want to build on this technology, right?

31:58: So we together are shepherding in a new age, the age of superintelligence.

32:04: And I think it's one of those fantastic things to be able to be a part of, as a company and as an industry.

32:10: And we have to do it responsibly, and we have to make this happen.

32:13: It's not a question of if AI is gonna take off; it will.

32:18: What we have to do is actually drive it, curate it, shepherd it, and make it do what it's supposed to do for humans.

32:25: I'm pretty excited about that.

32:28: Yeah, yeah.

32:29: I think we all are.

32:31: Well, thanks again for a great discussion.

32:36: I look forward to catching up with both of you again down the road.

32:41: Ken, I know we'll be catching up with you at next month's Data Center Frontier Trends Summit.

32:46: You're gonna be sitting on an incredible AI panel, so there's a lot to look forward to there, and Cologix is also going to be on our panels at the event. But yeah, thanks again, and thanks to Cologix and Lambda for a really great podcast talk.

33:04: I appreciate it.

33:06: You've been great, and Bill, I'll call you soon.

33:09: All right, sounds great.

33:12: Thanks everybody.

33:16: We'll see you next time on the Data Center Frontier show podcast.

Curious to test out the Midwest's AI factories? Deploy your own AI factory and find out.