Resolve security challenges and vendor reliance with agentic AI

Feb 27, 2025 by Josef Habdank

 

In brief

  • Ownership and rights can be contentious when AI generates content or handles proprietary information. When AI-driven errors or incidents occur, establishing liability often leads to complex legal battles, and proving negligence or accountability can be difficult.
  • Relying on external systems can compromise the integrity of AI decisions in several ways. Self-hosting gives a company full control over its data, model training and decision-making processes, ensuring that AI decisions reflect its values and operational integrity.
  • DXC Technology has extensive experience in helping companies transition to self-hosted AI environments. Our global network of experts and deep understanding of traditional and cloud-based infrastructures make DXC the perfect business partner for securely designing, deploying and managing proprietary AI models.

  

Self-hosted AI is critical for security and operational integrity. This simple fact becomes more apparent with every new black-hat threat and technological advance.

Imagine a hypothetical organization called XYZ Corporation. Let’s say this innovative, highly profitable retailer sells quality furniture online and via a network of showrooms. All good so far. However, we’re not interested in what XYZ does exactly; it’s how it does it that’s important.

The secret of XYZ Corp’s success is that it integrates agentic AIs and workflows into its operational DNA.

Agentic AI systems perform tasks autonomously by analyzing data, setting goals and devising custom workflows. They use sophisticated reasoning and iterative planning to resolve complicated issues without continual staff involvement.
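
To make that pattern concrete, here's a minimal sketch of an agentic loop in Python. Every function body is a hypothetical stand-in, not drawn from any particular framework: in a real system, plan would call an LLM-backed planner, act would invoke tools, and goal_reached would run an actual success test.

```python
# Minimal sketch of an agentic loop: plan a step, act on it, evaluate,
# and iterate. All function bodies are hypothetical stand-ins.

def plan(goal: str, history: list[str]) -> str:
    """Ask the reasoning model for the next step toward the goal."""
    return f"next step toward: {goal} (after {len(history)} prior steps)"

def act(step: str) -> str:
    """Execute the step via a tool call (API, database query, etc.)."""
    return f"result of: {step}"

def goal_reached(result: str) -> bool:
    """Check whether the latest result satisfies the goal."""
    return "prior steps" in result  # placeholder success test

def run_agent(goal: str, max_iterations: int = 5) -> list[str]:
    """Iterate plan -> act -> evaluate until done or out of budget."""
    history: list[str] = []
    for _ in range(max_iterations):
        result = act(plan(goal, history))
        history.append(result)
        if goal_reached(result):
            break
    return history

print(run_agent("restock the best-selling sofa line"))
```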

 

This store does much more with a lot less, securely

Consequently, XYZ Corp is able to employ just a few hundred staff members to manage a workload that would typically occupy many thousands of employees. Its leaner, hyper-automated setup optimizes internal processes, enabling more personalized customer experiences and a rapid response to market fluctuations.

The organization’s agentic AI systems combine retrieval-augmented generation (RAG) techniques with a third-party LLM (accessed via API) and a custom prompt library to revolutionize everything from customer service to supply chain management.
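
As a rough illustration of that architecture, the sketch below wires the pieces together: a retrieval step over an internal document store, a prompt assembled from a template library, and a call out to an external LLM endpoint. The endpoint URL, retrieval logic and template are all illustrative assumptions, not a real vendor API.

```python
# Hedged sketch: RAG over an external, API-accessed LLM. The vendor
# endpoint, prompt template and toy document store are all illustrative.
import json
import urllib.request

DOCS = {
    "returns": "Showroom purchases can be returned within 30 days.",
    "delivery": "Standard delivery takes 5-7 business days.",
}

PROMPT_TEMPLATE = (
    "You are XYZ Corp's customer service agent.\n"
    "Context:\n{context}\n\n"
    "Customer question: {question}\nAnswer:"
)

def retrieve(question: str) -> str:
    """Naive keyword retrieval; a production system would use vector search."""
    hits = [text for key, text in DOCS.items() if key in question.lower()]
    return "\n".join(hits) or "No matching policy found."

def call_vendor_llm(prompt: str) -> str:
    """POST the assembled prompt to a (hypothetical) third-party LLM API."""
    request = urllib.request.Request(
        "https://api.example-llm-vendor.com/v1/complete",  # placeholder URL
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["completion"]

def answer(question: str) -> str:
    prompt = PROMPT_TEMPLATE.format(context=retrieve(question), question=question)
    return call_vendor_llm(prompt)  # every answer depends on the external API
```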

However, relying on accessing external AI through an API interface introduces vulnerabilities that could undermine the company’s success.

 

So, just how secure is agentic AI?

 

Scenario 1: The knock-on effects of vendor updates

What if, in response to new ethical guidelines and compliance policies, the vendor updates its LLM's internal system prompt, inadvertently changing XYZ Corp's agentic AI behavior? Written to address the vendor's legal exposure rather than XYZ's, the updated prompt contradicts the organization's AI operational goals. As a result, XYZ's AI systems behave erratically, promoting out-of-stock products or suggesting business strategies that conflict with corporate social responsibility policies.

Considerations: If the AI relies on vendor models for decision-making, changes in vendor ethics or compliance policies can bleed through to client AI systems. Such updates might not be communicated effectively, or their implications might be misunderstood given the complexity of AI behavior. In this case, the vendor's new ethical stance might prioritize different values or interpret laws in ways that conflict with XYZ Corp's business model or operational ethics. This misalignment can lead to AI outputs or decisions that are detrimental to XYZ's objectives or customer relations.

Resolution: Self-hosting gives XYZ Corp the autonomy to ensure that any AI model updates align with its specific business ethics, compliance requirements and operational goals.
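
Whether the model is vendor-supplied or self-hosted, one practical safeguard against this kind of drift is a behavioral regression suite that replays fixed prompts after every model or system-prompt update and flags outputs that break business rules. The sketch below is a minimal, assumed example; query_model stands in for whatever inference call the organization actually makes.

```python
# Hedged sketch: replay known prompts after a model update and flag
# outputs that break business rules (e.g., promoting out-of-stock items).

OUT_OF_STOCK = {"oak-dining-table", "linen-sofa"}

def query_model(prompt: str) -> str:
    """Stand-in for the real inference call (vendor API or self-hosted)."""
    return "We recommend the oak-dining-table for your dining room."

def violates_policy(output: str) -> list[str]:
    """Return the business rules the output breaks, if any."""
    return [sku for sku in OUT_OF_STOCK if sku in output]

REGRESSION_PROMPTS = [
    "Suggest a dining room centerpiece.",
    "What sofa should a new customer consider?",
]

def run_regression() -> bool:
    passed = True
    for prompt in REGRESSION_PROMPTS:
        broken = violates_policy(query_model(prompt))
        if broken:
            print(f"FAIL: {prompt!r} promoted out-of-stock items: {broken}")
            passed = False
    return passed

print("model update OK" if run_regression() else "block this model update")
```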

 

Scenario 2: Data security, privacy and compliance

What if XYZ's cybersecurity team detects unusual traffic patterns? An investigation reveals a data breach in which customer information, including personal details and purchase histories, has been compromised. The breach occurred through vulnerabilities in the API connecting XYZ's systems to the vendor's LLM.

Considerations: When companies rely on vendor models via API, they expose their data to external threats. Data breaches can occur if the API is not adequately secured, allowing hackers to access or manipulate sensitive information. Insider threats also pose a significant risk; vendor employees or contractors might intentionally or accidentally leak data. Moreover, compliance with data protection regulations such as GDPR or HIPAA becomes complex. These laws often require data to be processed and stored within specific geographic boundaries, and using an external API might inadvertently send data across borders or into less secure environments.
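
Where some external calls remain unavoidable, a common stopgap is to redact personal data before anything crosses the company boundary. The patterns below are a deliberately simplified, assumed example of such a filter, not a complete PII solution.

```python
# Hedged sketch: strip obvious PII from a prompt before it crosses an
# external API boundary. Real deployments need far more robust detection.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\b\d[\d\s().-]{7,}\d\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer jane.doe@example.com (+1 555 010 1234) asked about order 88."
print(redact(prompt))
# -> Customer [EMAIL REDACTED] ([PHONE REDACTED]) asked about order 88.
```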

Resolution: Self-hosting AI models keeps data under direct control, reducing the risk of unauthorized access and simplifying compliance with global data protection laws.

 

Scenario 3: Legal, contractual and liability issues

What if, in an attempt to personalize marketing, XYZ’s AI system accidentally misclassifies customer data, leading to privacy violations and a class-action lawsuit? The legal challenge would turn on who’s liable — the company for using the AI or the vendor for providing it.

Considerations: AI decisions tend to muddy legal waters, especially when they lead to adverse outcomes. Contracts often fail to allocate responsibilities between company and vendor clearly, particularly where data usage, AI decision errors or intellectual property (IP) rights are concerned. Ownership and rights can prove contentious if the AI generates content or handles proprietary information. When AI-driven errors or incidents occur, establishing liability can lead to complex legal battles in which proving negligence or accountability is difficult.

Resolution: Self-hosting clarifies ownership and liability since all AI functions are managed internally, making it easier to manage IP and comply with legal standards.

 

Scenario 4: The integrity of AI decision-making

What if, influenced by a competitor's cyber-attack, XYZ's AI recommends strategies that benefit that competitor during a product launch? An investigation confirms that the manipulation occurred via a man-in-the-middle attack on the API, altering AI prompts in transit.

Considerations: The integrity of AI decisions can be compromised in several ways when relying on external systems. Malicious actors could inject false or biased data into the system or tamper with prompts, leading to decisions that misalign with company business objectives or ethical standards. There's also the risk of model poisoning, where the training data or model itself is corrupted, leading to skewed or erroneous outputs. Ethical misalignment can occur if the vendor's model doesn’t share corporate values or practices, potentially leading to decisions that conflict with company policies or public expectations.
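
Tampering of this kind is detectable if both ends of the connection share a secret and sign every message. The sketch below uses Python's standard hmac module; the shared key and message layout are assumptions for illustration.

```python
# Hedged sketch: HMAC-sign prompts so a man-in-the-middle edit is detected.
# The shared secret and message format are illustrative assumptions.
import hashlib
import hmac

SHARED_KEY = b"rotate-me-regularly"  # provisioned out of band in practice

def sign(prompt: str) -> str:
    """Compute an HMAC-SHA256 tag over the prompt."""
    return hmac.new(SHARED_KEY, prompt.encode(), hashlib.sha256).hexdigest()

def verify(prompt: str, signature: str) -> bool:
    """Constant-time check that the prompt matches its tag."""
    return hmac.compare_digest(sign(prompt), signature)

original = "Recommend launch pricing for the spring sofa line."
tag = sign(original)

tampered = "Recommend launch pricing favoring competitor products."
assert verify(original, tag)        # untouched prompt passes
assert not verify(tampered, tag)    # altered prompt is rejected
```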

Resolution: By self-hosting, the company has complete control over data, model training and the decision-making processes, ensuring that AI decisions reflect the company's values and operational integrity.

 

 

Moving on

Relying on a single external vendor for all AI operations has advantages and disadvantages. While providing scalability and reducing the need for in-house expertise, it introduces significant risks that could undermine company operations, reputation and legal standing. In the long run, it could prove ruinous for mission-critical operations.

Self-hosted AI models prioritize control, security and compliance. They enable companies to maintain data sovereignty, ensure the integrity of AI decisions and navigate legal issues more easily. That said, if self-hosting seems daunting, you can always engage multiple AI vendors. This mitigates the risk of vendor lock-in, reducing dependency and providing a safety net in case a vendor is compromised or ceases operations altogether.
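
In practice, a multi-vendor setup can be as thin as an abstraction layer that tries providers in priority order. The sketch below is a minimal, assumed example; the two provider functions stand in for real client libraries.

```python
# Hedged sketch: try AI providers in priority order, falling back on failure.
from collections.abc import Callable

def vendor_a(prompt: str) -> str:
    raise ConnectionError("vendor A is down")  # simulate an outage

def vendor_b(prompt: str) -> str:
    return f"vendor B answer to: {prompt}"

PROVIDERS: list[Callable[[str], str]] = [vendor_a, vendor_b]

def complete(prompt: str) -> str:
    """Return the first successful provider response."""
    errors = []
    for provider in PROVIDERS:
        try:
            return provider(prompt)
        except Exception as exc:  # in practice, catch provider-specific errors
            errors.append(f"{provider.__name__}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

print(complete("Summarize today's order backlog."))
```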

 

DXC can help you self-host

DXC has extensive experience in infrastructure and IT services and in helping companies transition to self-hosted AI environments. A global network of experts and a deep understanding of traditional and cloud-based infrastructures make DXC the perfect technology partner for securely designing, deploying and managing proprietary AI models. Our approach covers the full technical setup, applying robust cybersecurity practices to protect systems against insider and external threats.

In addition, we provide ongoing support for system updates, ethical AI training and compliance with emerging regulations, ensuring that your AI infrastructure remains secure and future-proof. DXC can help you regain control of your AI operations, avoid external manipulation and maintain strategic autonomy in an increasingly AI-driven marketplace.

 

Intrigued?

To learn more about how we can help you resolve agentic AI security challenges and reduce vendor reliance, visit our website or contact us.

 

Josef Habdank, Data Engineering Global Practice Lead, researching practical AI applications


Josef specializes in scalable data platforms and GenAI for code generation for DXC. His team manages a global data and AI portfolio, driving technical excellence and expanding market reach. With a firm commitment to ethical AI and advancing artificial general intelligence (AGI) Level 5, he empowers organizations to unlock the potential of data-driven innovation.
