If 2024 was the year of AI curiosity and 2025 was the year of experimentation, many industry observers believe 2026 may be the first year AI moves into day-to-day dealership operations. Conversations across the industry point to full AI integration moving from “nice to have” to “strategic priority.”

And it makes sense.

Dealerships using AI are seeing opportunities to improve efficiency, increase profitability, and elevate customer satisfaction in ways that simply weren’t possible a few years ago. From AI-driven lead engagement and service scheduling to analytics that sharpen pricing and inventory decisions, artificial intelligence is quickly becoming part of the modern dealership operating model.

This is a good thing.

But like every meaningful technological shift in automotive retail—from DMS evolution to cloud adoption—AI brings with it a new responsibility: reassessing best practices, especially around cybersecurity.

Why Dealerships Using AI Are Leaning In

AI isn’t being adopted just because it’s trendy. It’s being adopted because it works.

Forward-thinking dealers are leveraging AI to:

  • Respond instantly to online leads and increase conversion rates
  • Automate service appointment booking and reduce phone bottlenecks
  • Analyze customer behavior to personalize marketing
  • Improve inventory forecasting and margin optimization
  • Streamline internal workflows and reduce administrative burden

The potential upside is significant:

  • Higher gross profit through smarter pricing and marketing
  • Improved CSI through faster, more consistent communication
  • Stronger customer loyalty driven by convenience and personalization
  • Operational efficiency that offsets rising labor and overhead costs

In short, AI can help dealerships do more with the same—or even fewer—resources.

And in a market where margins are tighter and competition is relentless, that matters.

The Other Side of the Coin: AI Expands the Attack Surface

Here’s where perspective becomes important.

AI itself is not the risk. Poorly governed AI is.

When dealerships introduce AI into operations, they are typically introducing:

  • New integrations with CRM, DMS, phone systems, or Microsoft 365
  • Additional data flows involving customer PII and deal data
  • API connections to third-party AI vendors
  • Automated decision-making processes

Each of these expands the dealership’s digital footprint—what cybersecurity professionals call the attack surface.
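Getting a handle on that footprint often starts with something simple: writing it down. As a rough illustration only, the sketch below uses hypothetical tool names and fields to show how an IT team might keep a lightweight inventory of AI integrations and flag any that touch customer PII without a completed vendor security review.

```python
from dataclasses import dataclass

@dataclass
class AIIntegration:
    """One third-party AI tool connected to dealership systems."""
    name: str                # hypothetical tool name, not a real product
    connects_to: list[str]   # e.g., CRM, DMS, phone system
    handles_pii: bool        # does it touch customer PII or deal data?
    security_reviewed: bool  # has the vendor been vetted?

# Hypothetical example inventory; replace with your actual integrations.
inventory = [
    AIIntegration("lead-responder-bot", ["CRM"], handles_pii=True, security_reviewed=True),
    AIIntegration("service-scheduler", ["DMS", "phone system"], handles_pii=True, security_reviewed=False),
    AIIntegration("inventory-forecaster", ["DMS"], handles_pii=False, security_reviewed=True),
]

# Flag integrations that expand the attack surface without vetting.
for tool in inventory:
    if tool.handles_pii and not tool.security_reviewed:
        print(f"REVIEW NEEDED: {tool.name} touches PII via {', '.join(tool.connects_to)}")
```

Even a simple list like this answers the first question most security assessments ask: what is connected, and what data does it see?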

According to joint guidance from NSA, CISA, and international cyber authorities on AI data security, protecting the data used to train and operate AI systems is critical to ensuring the integrity and reliability of AI outcomes. If that data is tampered with, corrupted, or exposed, the AI system’s outputs can be compromised.

In practical dealership terms, that means:

  • Customer data could be exposed.
  • AI-driven communications could be manipulated.
  • Automated workflows could be disrupted.
  • Regulatory and compliance exposure could increase.

None of this is inevitable—but it is possible if AI adoption outpaces governance.
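One practical safeguard that follows from the NSA/CISA guidance is verifying the integrity of the data files an AI tool consumes, so tampering or corruption is caught before it skews outputs. Here is a minimal sketch of that idea using SHA-256 checksums; the manifest format and file paths are hypothetical, not part of any specific vendor's product.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Compare current file hashes against a trusted manifest.

    The manifest is a JSON map of {file_path: expected_sha256},
    recorded at a point when the data was known to be good.
    """
    manifest = json.loads(manifest_path.read_text())
    problems = []
    for file_path, expected in manifest.items():
        p = Path(file_path)
        if not p.exists():
            problems.append(f"MISSING: {file_path}")
        elif sha256_of(p) != expected:
            problems.append(f"TAMPERED OR CORRUPTED: {file_path}")
    return problems

# Hypothetical usage: run before each data sync to an AI vendor.
# for issue in verify_manifest(Path("ai_data_manifest.json")):
#     print(issue)
```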

AI Risk Is Different from Traditional IT Risk

One of the key insights from the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) is that AI risk is not identical to traditional software risk. AI systems are dynamic, data-driven, and often influenced by changing inputs over time.

In other words:

  • AI models learn from data.
  • Data changes.
  • Systems evolve.
  • Risk shifts.

NIST describes trustworthy AI systems as valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

For dealership management, this translates to a simple reality:

AI governance is not a one-time IT project. It is an ongoing management discipline.

A New Day Requires a New Assessment

Many dealerships already have strong cybersecurity programs in place. They’ve invested in firewalls, endpoint security, backups, MFA, and compliance programs.

That foundation is valuable.

But as dealerships using AI expand automation and data integration, it’s prudent—not alarmist—to ask a few updated questions:

  • Do we know exactly what data our AI tools access and store?
  • Have we reviewed vendor security practices and data retention policies?
  • Are access controls properly limited to least privilege? (See the sketch after this list.)
  • Are AI integrations monitored like other critical systems?
  • Is AI risk integrated into our broader cybersecurity and compliance program?
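To make the least-privilege question concrete: each AI integration's credentials should grant only the scopes it actually needs and nothing more. The sketch below is purely illustrative, with hypothetical tool and scope names; it compares the scopes a credential has been granted against a documented allowlist and reports anything extra that should be revoked.

```python
# Hypothetical scope audit: compare what each AI tool's API credential
# can do against what it is documented to need (least privilege).

NEEDED_SCOPES = {
    "lead-responder-bot": {"crm.read", "crm.write_notes"},
    "service-scheduler": {"dms.appointments.read", "dms.appointments.write"},
}

# In practice, granted scopes would come from your identity provider
# or the vendor's admin portal; these values are made up.
GRANTED_SCOPES = {
    "lead-responder-bot": {"crm.read", "crm.write_notes"},
    "service-scheduler": {"dms.appointments.read", "dms.appointments.write",
                          "dms.customers.export"},  # broader than needed
}

for tool, granted in GRANTED_SCOPES.items():
    excess = granted - NEEDED_SCOPES.get(tool, set())
    if excess:
        print(f"{tool}: revoke excess scopes {sorted(excess)}")
```

Running a check like this on a schedule turns least privilege from a one-time setup decision into the ongoing discipline the NIST framework calls for.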

The goal isn’t to slow innovation. It’s to support it responsibly.