No Overlord: Why AEC AI Is Heading Toward a Thousand Specialized Tools

Friday’s AUGI CON 2026 panel on Applied AI in AECO was one of the better conversations I’ve been part of in a while. Not because it was polished or scripted – it wasn’t. Jeff Thomas kept it honest from the moderator seat, the audience was live in Zoom submitting questions and reactions in real time, and the pushback came fast. That’s what you want.

If you’re an AUGI member (it’s free, by the way), recordings should be coming your way soon. It’s worth watching.

A panel discussion has real constraints. You’ve got a few minutes per answer, a moderator keeping the conversation moving, and three other people at the table with things to add. The data points and research behind what I said on Friday largely stayed in my notes. This post is the fuller version – the references, the numbers, and the reasoning behind the positions I took. Consider it the extended cut.

Here’s what I was actually thinking about.

The Room Was Already Past the “Should We Use AI?” Question

That was refreshing and obvious to me. The audience wasn’t debating whether AI belonged in AEC workflows. They were asking which tools actually work, where things fall apart in production, and what a real governance policy looks like.

We had three practitioners on stage with different angles on it. Nick Miller at ARKANCE is working the BIM delivery and client-adoption side. Troy White at CIMA+ has run AI-adjacent digital workflows on projects. I came in from the platform strategy and enterprise operations side, with civil engineering specifics from running technology across a 500-person engineering firm, plus 20+ years at Autodesk on product teams – involved in many of the discussions and research that are only now showing up in AI product and feature strategy.

The range made for a better panel than if we’d all been coming from the same direction. What follows is the thinking behind my answers – with the data and sources I didn’t have time to cite during the discussion.

What the Data Actually Says (And Why It’s Contradictory)

On the panel I opened with two numbers that don’t seem like they should both be true at the same time. Here’s the full context behind them.

68% of early AEC AI adopters are saving $50,000 or more per year, according to Bluebeam’s 2026 AEC Technology Outlook. And 95% of enterprise AI pilots deliver zero measurable ROI, per MIT’s NANDA Initiative.

Both are accurate. The difference between those two populations isn’t the AI tool. It’s whether the firm did the data and governance work before deploying anything.

The firms winning with AI right now started two or three years ago. They standardized their data models, cleaned up their project file structures, established what goes where. Then when AI tools arrived, they had something clean enough to query. Firms that skipped that step are finding out that AI doesn’t fix messy data – it amplifies it, and that’s a problem.

That’s the actual story of 2025/2026 in AEC AI adoption. Not the tools. The infrastructure and decisions behind the tools.

Where AI Is Genuinely Working in AEC Right Now

On the panel I kept the use case examples tight because we had three panelists and a moderator keeping things moving. Here’s the fuller picture with specific tools and numbers.

Document analysis is the most common first win, and it’s not glamorous, but it’s real. Searching specs, drafting RFI responses, flagging contract clauses, summarizing submittals – tools like Procore are doing this in production environments right now.

The use case I talked through on the panel was operational data analysis at scale. Take machine-generated data that no human can process at volume – CERs (customer error reports) from Autodesk product crashes, error logs, crash telemetry, performance metrics across a large user base – feed it through an AI pipeline, and let the AI find the patterns. A human decides what to fix first. That’s it. Not dramatic. We cut Autodesk product crashes 34% in 90 days doing exactly that. The math on avoided cost works out to $12 to $500 per avoided crash event, depending on the crash type’s interruption and data impact and the employee rate. At 500 users, that adds up fast.
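The triage pattern described above can be sketched in a few lines. This is a minimal illustration only – the event records, field names, and dollar figures are made up for the example, not Autodesk’s actual telemetry schema:

```python
from collections import Counter

# Illustrative crash events; in practice these come from CER/telemetry exports.
events = [
    {"signature": "acad_regen_overflow", "minutes_lost": 20},
    {"signature": "acad_regen_overflow", "minutes_lost": 35},
    {"signature": "revit_sync_deadlock", "minutes_lost": 90},
    {"signature": "acad_regen_overflow", "minutes_lost": 15},
]

HOURLY_RATE = 120  # assumed blended employee rate, $/hr

def rank_crash_types(events):
    """Group events by crash signature and rank by total estimated cost."""
    cost, count = Counter(), Counter()
    for e in events:
        cost[e["signature"]] += e["minutes_lost"] / 60 * HOURLY_RATE
        count[e["signature"]] += 1
    return sorted(
        ({"signature": s, "events": count[s], "est_cost": round(cost[s], 2)}
         for s in cost),
        key=lambda row: row["est_cost"],
        reverse=True,
    )

# The human decision point: the top of this list is what you fix first.
for row in rank_crash_types(events):
    print(row)
```

The AI part in production is the pattern-finding (clustering stack traces into signatures); the ranking and the fix-first decision stay simple and human-reviewable, which is the whole point.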

The common thread across everything working is the same: high-volume, repeatable, data-structured workflows where the outcome is measurable. If your data is consistent and machine-generated, AI can find patterns in it. If it’s free-text emails and inconsistently named PDFs from four years ago, you’re going to be disappointed.

On the civil engineering side, I referenced two tools, at a high level, that don’t get enough attention.

InfoDrainage’s ML Deluge feature is trained on over 10,000 hydraulic simulations. It predicts flood patterns and optimizes stormwater control placement in real time, during design, without running a full hydraulic model. That’s not general AI. That’s a model trained on 10,000 prior engineering decisions.

Softree’s Path Explorer AI, part of the Softree Optimal add-on for RoadEng, does automated preliminary corridor alignment optimization on TIN terrain. Point A to Point B, constrained by max grade, no-go zones, cut and fill limits. Returns optimized route options in minutes. Documented subgrade construction cost savings of 10 to 30 percent. Two years in production. Most civil engineers outside of road and resource corridor work have never heard of it, but we used it and found real benefit for large projects that Civil 3D would choke on just thinking about.

Both of those are better at their specific jobs than any general-purpose AI will be – at least, that’s what the near-term landscape looks like. Which brings me to the main thing.

There Won’t Be One Overlord AI in AEC (hopefully not)

This was the part of the conversation I most wanted to expand on during the panel but couldn’t fully get to. The observation is simple but the implications run deep.

The AI tools that actually work in AEC are the specific task ones.

Path Explorer AI currently beats Civil 3D for corridor alignment optimization. InfoDrainage ML Deluge beats any general-purpose AI for stormwater design decisions. The pattern holds everywhere you look.

I don’t think we’re heading toward one platform that does everything well. That’s not how domain expertise works. What you get when you build for everything is a tool that does nothing great.

But here’s where it gets interesting for AEC specifically. We might not need an overlord. What we might get instead is platforms that let specialized AI tools plug in – hopefully not artificially controlled by a single vendor. Competition is good, especially in cloud and AI platforms.

Autodesk is already heading this direction. Revit 2027 shipped with a built-in MCP server – Model Context Protocol – that runs automatically whenever a project is open. Any MCP-compatible AI client can connect to the live model and read element categories, parameter values, spatial data, and project information. BIMSmith tested six real project scenarios against it, including code compliance, egress analysis, solar feasibility, space diagrams, and Dynamo scripting. All six worked, with appropriate human review – and that human review matters.

The MCP architecture means Autodesk is building a connection layer, not a single AI that does everything. A third-party drainage AI could theoretically connect to that server. A structural analysis AI could connect. A procurement AI could connect. The platform becomes the integration point, not the intelligence.
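For a concrete sense of what that connection layer looks like at the wire level: MCP is built on JSON-RPC 2.0, so any client’s first moves against any MCP server are an initialize handshake and a tools/list discovery call. Here’s a minimal sketch of those two messages – the transport, the client name, and whatever tools Revit’s server actually exposes are outside this snippet and assumed for illustration:

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the format MCP messages use."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Handshake: announce protocol version and client identity.
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2025-03-26",  # an MCP spec revision date
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},  # hypothetical client
})

# 2. Discovery: ask the server what tools it exposes.
#    For Revit, this is where model-query tools would appear.
discover = jsonrpc_request(2, "tools/list")

print(init)
print(discover)
```

That discovery step is why the architecture scales: a drainage AI, a structural AI, and a procurement AI can all speak the same two messages and learn what the platform offers, without the platform knowing anything about them in advance.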

Procore’s acquisition of Datagrid in January 2026 is pointing in the same direction. Datagrid’s whole premise is that AI should execute across fragmented systems, not just answer questions inside one. Connect the data, execute the workflow, close the loop. That’s not Procore becoming the AI. That’s Procore becoming the orchestration layer.

If that’s where this goes, and I think it is, then the competitive question for AEC firms shifts. It’s not “which AI platform do we pick?” It’s “which integration layer do we build our stack around, and which domain-specific tools do we plug into it?”

That’s a harder question and a more interesting one.

The Governance Conversation the Audience Actually Wanted

We spent a solid chunk of time on data policy during the panel, and the reaction told me this hasn’t been covered enough at industry events. I want to give it more space here than a panel format allows.

Nick Miller brought up something that stopped the room. He described a situation where someone used a free AI chat tool to review and prepare a proposal – uploading competitive business details, pricing strategy, project approach – into a consumer LLM with no enterprise protections. The concern: a competitor may have benefited from that information ending up in the training pipeline.

That’s not a hypothetical. That’s a real business risk that most firms haven’t thought through because they haven’t asked what their people are actually doing.

And here’s the thing – if you think your employees aren’t pasting important design details, client information, proposal content, and proprietary bidding methodology and language into free AI chat tools, you may be sorely mistaken. The AI dabblers and shadow users are everywhere. They’re trying to get their work done faster, which is completely reasonable, and they’re reaching for whatever tool is in front of them. A free ChatGPT account. A personal Claude subscription. Gemini on their personal Google account. None of those have enterprise data protections *unless* someone explicitly set them up that way.

Most firms have employees using consumer AI accounts for client work. ChatGPT, Gemini, and Claude’s consumer tiers all train on user data by default unless explicitly opted out. Consumer plans are governed by terms of service. Enterprise plans are governed by a Data Processing Addendum. One of those is a legal protection. The other is not.

Claude is worth flagging specifically because it was unique until recently. Before September 2025, Anthropic did not train on consumer chat data by default. That changed effective September 28, 2025. Consumer accounts – Free, Pro, and Max – now default to training data use with a five-year retention period unless you opt out in Privacy Settings. Enterprise and API accounts are unchanged and remain excluded.

A few people reacted in the chat when this came up. Which is exactly why it needs to be said out loud at events like this.

The practical advice is this: AI is here to stay. It will help AEC firms innovate and accelerate. Having no plan for how your people use it is the real danger – not the AI itself.

So be proactive. Have the conversation with your employees before the incident, not after it. What tools are approved and what are not. What data can go into them. What requires enterprise accounts. What needs human review before it goes anywhere near a deliverable. Some industry standards groups are suggesting a split of roughly 30% AI and 70% human working in combination.

And then assemble a working group. Get the right people in a room – IT, legal, practice leadership, project managers, and the people who are already using AI whether you know it or not. Build an acceptable use policy that reflects how work actually gets done at your firm, not an idealized version of it. The firms that do this now will be in a much better position than the ones who wait until something goes wrong.

The practical account-level advice I gave is the same as it is in my firm: mandate enterprise accounts for any AI tool used on client work. Not as a preference. As policy.

What I’d Tell Someone Starting This Week

Jeff asked each panelist for a practical closing takeaway. Mine was intentionally compressed to fit the format. Here’s the full version.

Pick one workflow that’s painful, high-volume, and doesn’t require professional judgment on every step. Document search is usually the right answer. RFI summarization is close behind.

Check your data first. If the data for that workflow is clean and consistent, you’re ready to try AI on it. If it’s scattered across five platforms with inconsistent naming, clean the data first. The AI will wait.
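If you want a quick, honest read on how inconsistent that workflow’s data actually is, a naming audit is a ten-minute exercise. This sketch reduces filenames to coarse patterns (letters become A, digits become 9) and counts them – many distinct patterns relative to file count means inconsistent naming. The filenames here are invented for illustration; in practice you’d walk the project share with `os.walk`:

```python
import re
from collections import Counter

# Illustrative file names; in practice, collect these from the project share.
filenames = [
    "A-101_FloorPlan_R3.pdf", "A-102_FloorPlan_R1.pdf",
    "floorplan level 2 FINAL (2).pdf", "Spec_Div09_v2.docx",
    "scan0041.pdf",
]

def naming_pattern(name):
    """Reduce a filename to a coarse pattern: letter runs -> A, digit runs -> 9."""
    stem = name.rsplit(".", 1)[0]           # drop the extension
    pattern = re.sub(r"[A-Za-z]+", "A", stem)
    return re.sub(r"\d+", "9", pattern)

patterns = Counter(naming_pattern(f) for f in filenames)
print(patterns)  # 4 distinct patterns across 5 files: a red flag
```

Five files, four naming conventions. Scale that to a few hundred thousand files and you have the data-cleanup backlog that should come before any tool purchase.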

Then use the AI features already inside the tools you’re paying for. Autodesk’s 2027 products shipped with more AI built in than most firms have explored – like Autodesk Assistant in Tech Preview mode. If Microsoft 365 Copilot is already in your tenant, it runs under your M365 subscription and your data is protected (look for the green checkmark indicator). Procore Helix is already in your Procore instance. Buy something new only after you’ve exhausted the tools and technologies you already own.

Set a baseline before you deploy anything. Hours per task, error rates, rework volume. You cannot measure ROI without knowing where you started.
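The baseline point deserves a worked example, because the arithmetic is the whole argument. All the numbers below are assumed for illustration – the only thing that matters is that the “before” column was measured before deployment:

```python
# Assumed numbers, purely for illustration: an RFI-summarization workflow.
baseline = {"hours_per_rfi": 3.0, "error_rate": 0.12, "rfis_per_month": 40}
after    = {"hours_per_rfi": 1.8, "error_rate": 0.09, "rfis_per_month": 40}

HOURLY_RATE = 120  # assumed blended $/hr

def monthly_savings(before, after, rate):
    """Hours saved per month times rate. Only computable if you measured 'before'."""
    saved_hours = (before["hours_per_rfi"] - after["hours_per_rfi"]) * after["rfis_per_month"]
    return saved_hours * rate

print(round(monthly_savings(baseline, after, HOURLY_RATE), 2))
```

With no baseline, the `before` dict doesn’t exist, `saved_hours` is unknowable, and any ROI claim is a guess – which is exactly the trap the 95%-of-pilots number describes.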

AI in AEC Is Not a Technology Problem

I said this on the panel and it’s the thing I most want to leave you with – the point I’d have spent ten minutes on if the format allowed it.

AI in AEC is not a technology problem. It’s a data problem.

AEC has been accumulating siloed, unstructured, inconsistently formatted data for decades. Project files spread across network drives, SharePoint instances, Dropbox folders, and six different project management platforms – in even more formats. Specs scanned to PDF and never indexed. Survey data in proprietary formats nobody outside one firm can read. As-builts that don’t match what got built. Drawings named whatever made sense to whoever saved them in 2019, or to that 1998 company CAD standard.

This is not a new problem. AI just makes it unavoidable.

When you drop an AI tool into that environment, it doesn’t organize the chaos. It queries it. And when the data is siloed and unstructured, what comes back is noise that looks like signal. Confident, fluent, authoritative-sounding noise. That’s more dangerous than no answer at all.

The firms struggling with AI right now didn’t necessarily pick the wrong tool. They picked the tool before they identified the problem, and before they looked honestly at the data sitting behind it.

The sequence matters. Find your problem first. A real one – specific, repeatable, with a measurable baseline you can track improvement against. Then look at the data for that problem. Is it clean? Consistent? Accessible? If yes, you’re ready to evaluate tools. If no, the data work comes first. The ROI from cleaning up your data infrastructure almost always exceeds the ROI from deploying AI on bad data.

Then – and only then – go find the tool. And start with what you already own.

I’ve watched firms chase AI features for months without asking what problem they were trying to solve. The technology is not the hard part. Knowing what you need it to do, and having data clean enough to do it with, is the hard part. That’s been true for every technology wave I’ve watched in this industry. AI isn’t different.

Thank You to AUGI, the Panelists, Sponsors, Volunteers, and Everyone Who Showed Up

AUGI has been doing this for a long time. Users helping users – that’s not a tagline, it’s actually how the organization operates, and AUGI CON is the clearest expression of it. A practitioner-run online conference where the people on stage are the same people in the trenches every day. No vendor keynotes selling you something as the main purpose. Just honest conversations between people who use the software, with an audience asking sharp questions live in the chat.

Thank you to Jeff Thomas for moderating a discussion that stayed grounded and moved fast. Nick Miller and Troy White brought perspectives I learned from, and the panel was better for having three genuinely different angles on the same problem rather than three people agreeing with each other.

Thank you to the AUGI Board and everyone who put AUGI CON 2026 together. Online events like this don’t happen without a lot of invisible work of volunteers, and the quality of the conversation on Friday reflects it.

And to the AUGI members who joined live – the questions and reactions coming through Zoom were great. That’s what makes a panel worth doing.

Recordings should be coming your way soon.

Users helping users. Still the best model going.

FAQ

What is applied AI in AEC?
Applied AI in AEC refers to AI tools actually deployed in production workflows – document analysis, clash detection, quantity takeoff, drainage design, corridor optimization – as opposed to experimental or demonstration use. Applied means it’s running on real projects and producing measurable outcomes.

Is AI in AEC a technology problem or a data problem?
It’s a data problem. AEC firms have decades of siloed, unstructured, inconsistently formatted data across network drives, project management platforms, and proprietary formats. AI doesn’t organize that chaos – it queries it. On bad data, AI returns noise that looks like signal. Firms that succeed with AI almost always did the data infrastructure work first. The sequence is: identify the problem, assess the data, then find the tool.

What AEC AI tools are working in production today?
Document search and analysis tools (Trunk Tools, Document Crunch, Procore Helix), stormwater design AI (InfoDrainage ML Deluge), corridor alignment optimization (Softree Path Explorer AI and RoadEng), site progress monitoring (Buildots, OpenSpace), schedule risk prediction (nPlan, Alice Technologies), and Revit’s built-in MCP server enabling live model AI queries.

Why do most enterprise AI pilots fail in AEC?
85% of AI project failures trace to data quality. AEC firms typically have fragmented, inconsistently structured data spread across multiple platforms, and AI amplifies bad data rather than fixing it. Firms that succeed with AI almost always standardized their data infrastructure first.

What is the difference between consumer and enterprise AI accounts?
Consumer accounts at ChatGPT, Gemini, and Claude train on user data by default and are governed by terms of service. Enterprise accounts are governed by a Data Processing Addendum, which is a legal protection that excludes your firm’s data from training. For AEC firms using AI on client projects, the difference is not trivial.

What is MCP and why does it matter for AEC?
Model Context Protocol is an open standard that lets AI agents connect to external data sources and tools. Autodesk implemented an MCP server directly in Revit 2027 – it runs automatically when a project is open and lets any MCP-compatible AI read live model data. This is the architecture enabling AI from multiple specialized vendors to plug into a single platform.

Will one AI platform dominate AEC?
My read is no. Domain-specific AI tools – trained on specific engineering data for specific problems – consistently outperform general-purpose AI in their domains. What’s more likely is that major platforms (Autodesk, Trimble, Hexagon, Procore) become integration and orchestration layers that specialized AI tools plug into, rather than one platform building all the AI itself.

Cheers,

Shaan
