AI is eating the world – challenges in the corporate context

“Human history is characterised by waves of innovation that change everything like an unstoppable tsunami – the agricultural revolution, the steam engine, the internet. Artificial intelligence is the next big wave, the coming wave that is rolling towards us, and we are not prepared for it.”
Mustafa Suleyman, “The Coming Wave”
So reads the blurb of Suleyman’s book. In it, Suleyman, co-founder of DeepMind and Inflection AI, highlights the opportunities and risks of artificial intelligence and urgently warns of a loss of control. In his view, technological development will confront us with a choice between catastrophe and dystopia.
In practice, we are not quite that far yet. However, with the spread of the generative AI tools available today, loss of control is already a pressing issue.
Recognising challenges
Generative AI is now omnipresent. It is reported on in the media, discussed among colleagues, and new services based on it seem to appear every few days. The broad and free availability of such tools increasingly presents companies with challenges: while most employees quickly learn to use them, and their use can certainly increase efficiency, it is not without risk.
Data protection
Both in Switzerland and in Europe, several pieces of legislation have recently been passed that require companies and authorities to actively manage their supply chains with regard to information security risks. In addition to the new data protection requirements, these include the Swiss Information Security Act (ISG) and the European NIS2 Directive (Directive on measures for a high common level of cybersecurity across the Union).
Trade secrets
It is no secret that some large companies have banned the use of AI chatbots; the cases of Samsung and Apple have been discussed in the press. Both fear that employees will disclose confidential company data and that this could have negative consequences. In Samsung’s case, it is known that employees had already submitted proprietary source code to ChatGPT in the run-up to the ban.
Little is known about the security precautions taken by the providers, and in the case of ChatGPT there have already been several headlines that at least cast doubt on whether security is a top priority: customers’ credit card details were leaked, a bug in the API could be exploited by developers for free access and, most recently, some customers suddenly found other users’ entire chat histories, including sensitive data, in their own accounts.
The concerns of Samsung, Apple and other companies that have issued similar bans therefore seem justified. Whether such a restrictive approach is necessary, however, is a judgement that each organisation must make for itself.
Quality control
A confident demeanour despite complete cluelessness always helps in meetings. All modern AI-powered chatbots have mastered this so well that many a successful management consultant or non-commissioned officer would be green with envy. Hallucinations are false statements made by an AI that often sound so competent they are quickly taken for the truth.
What is a challenge for AI researchers and sometimes makes for amusing newspaper articles (“AI Hallucinates: Professor fails entire class after ChatGPT falsely tells him students used AI for essay”) can also become a real problem (“ChatGPT falsely accuses US law professor of sexually harassing a student, invents a news article”).
Anyone with experience in generative AI can write a credible-sounding article about customised gene therapies without really knowing anything about synthetic biology. The article would, however, probably contain factual errors and invented sources, which would sooner or later provoke critical questions. And this is precisely the problem that arises when employees use generated content directly, without someone with the necessary expertise checking and, if necessary, correcting it. Scenarios in which customer relationships are permanently damaged or a company’s operations are disrupted are certainly conceivable.
Loss of control
What was already a problem with the emergence of cloud services is even more so with the current wave of generative AI tools. Employees with internet access can use such tools to work faster, more efficiently and more effectively, without any major hurdles, free of charge and sometimes even without registering. Many do not even realise that they are transmitting data to a third party. No Samsung developer, short of criminal intent, would ever have thought of walking into Microsoft’s offices with proprietary code to discuss it. But pasting that code into a chatbot on the internet to optimise it? Why not?
Companies can train and sensitise their employees. They can rely on technologies such as CASB (Cloud Access Security Broker) and DLP (Data Leakage Prevention) to block access to chatbots or to filter the data entered there. There will, however, never be one hundred per cent control: the almost daily growing number of AI tools, combined with the virtually endless ways of placing confidential data in chat prompts, quickly makes such control efforts look like a Sisyphean task.
Overcoming challenges
So what can companies do? First, they should define whether and how this new technology is to be used. In principle, everything is conceivable, from outright bans to active promotion and targeted integration into the company’s own business processes. Whatever the choice, it must be in line with the company’s objectives and management’s risk appetite. In the end, it comes down to a (not entirely simple) weighing of opportunities and risks.
Define rules
In order to guide the use of generative AI in the desired direction, rules need to be defined. These rules should be as clear as possible and communicated in a comprehensible manner; appropriate training is often also required. Unless a general prohibition is envisaged, the rules should, for the sake of clarity, refer to concrete use cases or be easily adaptable to them.
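To make the idea of use-case-based rules more tangible, the following is a minimal sketch of what such a policy could look like in machine-readable form. The use cases, rulings and conditions are purely illustrative assumptions, not a recommended rule set.

```python
# A minimal sketch of a use-case-based AI usage policy.
# All use cases and conditions below are illustrative assumptions.

AI_USAGE_POLICY = {
    # use case               -> (allowed, condition)
    "text_drafting":          (True,  "no personal or customer data in prompts"),
    "code_assistance":        (True,  "only for non-proprietary, open-source code"),
    "translation":            (True,  "public documents only"),
    "customer_data_analysis": (False, "prohibited: confidential data"),
}

def check_use_case(use_case: str) -> str:
    """Return a human-readable ruling for a given use case."""
    allowed, condition = AI_USAGE_POLICY.get(
        use_case, (False, "not covered; ask the security team")
    )
    verdict = "permitted" if allowed else "not permitted"
    return f"{use_case}: {verdict} ({condition})"

if __name__ == "__main__":
    for case in AI_USAGE_POLICY:
        print(check_use_case(case))
```

Capturing the rules in such a structured form has a side benefit: the same definitions can feed both the employee-facing documentation and any technical enforcement discussed below.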
Check use
If it supports the implementation of the strategy, using technology for control is a good idea. Security solutions such as the CASB or DLP tools mentioned above can enforce the rules, at least to a certain extent. Even if they cannot create 100% watertight security barriers, they minimise unintentional data leaks and make users aware of the defined rules when breaches occur. It is also worth mentioning that, for some time now, there have been products that themselves use AI to minimise data leaks.
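To illustrate the principle, here is a minimal sketch of the kind of DLP-style check a gateway in front of a chatbot might perform. The patterns are deliberately simple illustrations; commercial DLP products ship far richer detectors, and the pattern names and thresholds here are assumptions.

```python
import re

# Illustrative DLP patterns; real products use far more sophisticated detectors.
DLP_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of all DLP patterns found in a prompt."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(prompt)]

def filter_prompt(prompt: str) -> bool:
    """Block the prompt and warn the user if anything sensitive is found."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return False
    return True

if __name__ == "__main__":
    filter_prompt("Please review: -----BEGIN RSA PRIVATE KEY----- ...")  # blocked
    filter_prompt("Summarise this press release for me.")                # allowed
```

Even a check this simple demonstrates the dual effect described above: it stops the most obvious leaks and, through the warning message, reminds the user of the rules at the moment of the breach.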
Analysing the corresponding logs (firewall, web proxy, endpoint) can provide an overview of the use of generative AI tools and, if necessary, serve as a basis for monitoring.
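As a simple illustration, such a log analysis might look like the following sketch. The log line format and the list of AI domains are assumptions and would need to be adapted to the proxy actually in use.

```python
import re
from collections import Counter

# Assumed list of generative AI service domains; extend to match reality.
AI_DOMAINS = ("chat.openai.com", "gemini.google.com", "claude.ai")

# Assumed proxy log line format: "<timestamp> <user> <url> <status>"
LOG_LINE = re.compile(r"^(\S+)\s+(\S+)\s+(\S+)\s+(\d+)$")

def count_ai_usage(log_lines):
    """Count requests to known generative AI domains, per user."""
    usage = Counter()
    for line in log_lines:
        match = LOG_LINE.match(line.strip())
        if not match:
            continue  # skip lines that do not fit the assumed format
        _, user, url, _ = match.groups()
        if any(domain in url for domain in AI_DOMAINS):
            usage[user] += 1
    return usage

if __name__ == "__main__":
    sample = [
        "2024-05-01T09:12:03 alice https://chat.openai.com/backend-api/conversation 200",
        "2024-05-01T09:13:10 bob https://intranet.example.com/start 200",
    ]
    for user, hits in count_ai_usage(sample).most_common():
        print(f"{user}: {hits} request(s) to generative AI services")
```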
Organisational controls can also play an important role in implementing the strategy. Regular training and information sensitise employees. Supervisors can monitor compliance with the defined rules as part of their supervisory duties. And finally, implementing the need-to-know principle can limit the potential for unwanted data leaks, and not just in connection with generative AI.
Summary
Suleyman writes: “Civilisation’s greed for useful and cheaper technologies is boundless. This will not change.” This statement is just as true of humanity as a whole as it is in the microeconomic context of an individual company. And just as, according to Suleyman, civilisation should set the rules for dealing with the coming wave, every organisation should do the same with regard to its own use of artificial intelligence. This is not about banning such tools out of hand; rather, it is about formulating and implementing a strategy that weighs the potential opportunities against the risks.