Biased AI

Understanding how algorithmic bias influences decision-making, trust, and adoption of AI technologies.

No AI chatbot is entirely neutral. Built on vast, imperfect datasets and shaped by human input, every model carries some degree of bias, and when that bias goes unrecognized by its users, the potential for manipulation is substantial.

In many Swiss companies, AI adoption does not begin with an official strategy but with individual employees who start using publicly available chatbots such as ChatGPT, Claude, or Gemini to streamline their work. This “shadow AI” spreads organically, often without the company knowing how, or to what extent, AI is already embedded in its processes.

If these chatbots carry political, ideological, or cultural bias, as research has shown, they can subtly shape the content produced, the information selected for decision-making, and even the personal views of the employees using them. Over time, this influence can seep into internal reports, customer communications, marketing materials, and strategic choices, embedding bias into the company’s “voice” and operations without deliberate intent.

In Switzerland, where neutrality is a core part of the national brand and industries such as banking and pharmaceuticals have low tolerance for reputational risk, the unmonitored use of biased AI chatbots poses a particular challenge. It acts as an invisible filter through which data, narratives, and decisions pass, shaping outcomes long before leaders realize the source of the influence.