Red Hat

AI is becoming the new foundation for businesses. That’s why Red Hat is committed to enabling every organization to run any AI model on any hardware and in any IT environment, fully based on open source technologies (such as the Linux operating system) and with digital sovereignty. To achieve this, Red Hat works closely with a wide range of partners across the AI ecosystem.

The open source approach plays a particularly important role for Red Hat: it ensures openness and transparency, and guarantees that companies maintain full control over their data, AI models, and infrastructure. It also helps avoid dependencies on proprietary vendors.

CANCOM provides comprehensive support to help you successfully implement and leverage Red Hat’s AI solutions within your organization.


Our partnership with Red Hat 

As a Premier Partner in the Red Hat Ecosystem Community, CANCOM advises and supports organizations in deploying Red Hat’s AI solutions securely and reliably. CANCOM also maintains strategic partnerships with all recommended Red Hat infrastructure and cloud partners and, drawing on extensive expertise, brings these solutions together. The result: CANCOM equips your IT infrastructure for AI.

Service portfolio

In general, Red Hat’s AI solutions fall into three categories, with the scope of each category building on the previous one.

  • Red Hat AI Inference Server
    Based on vLLM, the inference server ensures optimal GPU utilization, reduces response times, and improves inference for leading open‑source AI models in hybrid IT environments. This enables faster and more cost‑efficient deployment of AI models. Combined with LLM Compressor, inference efficiency increases further without significant loss of model accuracy.

  • Red Hat Enterprise Linux AI (including Red Hat AI Inference Server)
    This platform enables the consistent execution of LLMs on individual servers at scale. Features such as image mode ensure reliable operation. In addition, identical security profiles can be applied across the entire Linux environment, bringing different teams together within a single workflow.

  • Red Hat OpenShift AI (including Red Hat AI Inference Server and Enterprise Linux AI)
    Built on Red Hat OpenShift, this platform enables large‑scale lifecycle management of generative and predictive AI models. Organizations can use it to develop, deploy, and manage sovereign and private AI models and agents across a wide range of hybrid cloud environments.
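As a minimal sketch of what working with these solutions looks like in practice: vLLM-based inference servers such as the Red Hat AI Inference Server typically expose an OpenAI-compatible HTTP API. The server URL, endpoint path, and model name below are illustrative assumptions, not values from this page; the snippet only composes the request body rather than contacting a live server.

```python
import json

# Illustrative assumptions: a locally running, OpenAI-compatible
# inference endpoint and a placeholder model identifier.
BASE_URL = "http://localhost:8000/v1"   # assumed server address
MODEL = "example-org/example-llm"       # placeholder model name

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Compose an OpenAI-style chat completion request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize our Q3 results in one sentence.")
# In a real deployment this body would be POSTed to
# f"{BASE_URL}/chat/completions"; here we only print it.
print(json.dumps(payload, indent=2))
```

Because the API follows the OpenAI convention, existing client libraries and tooling can usually be pointed at such a server by changing only the base URL.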

News & further information

Solution Approach for Productive AI

Learn in this German-language article how organizations can use Red Hat’s AI solutions to implement their AI strategy flexibly, securely, and with full control in their own data center.

Productive AI – Sovereign & Scalable

Read the German‑language information sheet to learn how you can implement a sovereign AI platform with Red Hat and CANCOM – one that meets the highest requirements for data sovereignty, security, and efficiency.